Jan 24 03:41:41 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 24 03:41:41 crc restorecon[4677]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 03:41:41 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 
03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 03:41:42 crc 
restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 
03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 
03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc 
restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 03:41:42 crc restorecon[4677]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 24 03:41:43 crc kubenswrapper[4772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 03:41:43 crc kubenswrapper[4772]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 24 03:41:43 crc kubenswrapper[4772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 03:41:43 crc kubenswrapper[4772]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 03:41:43 crc kubenswrapper[4772]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 24 03:41:43 crc kubenswrapper[4772]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.434233 4772 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439856 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439890 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439900 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439908 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439921 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439931 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439942 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439953 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439961 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439970 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439978 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439986 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.439995 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440003 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440011 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440022 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440033 4772 feature_gate.go:330] unrecognized feature gate: Example Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440042 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440050 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440098 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440107 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440116 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440125 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440133 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440141 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440150 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440158 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440166 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440174 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440182 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440190 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440197 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440208 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440216 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440224 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440231 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440239 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440247 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440255 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440263 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440270 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440278 4772 feature_gate.go:330] 
unrecognized feature gate: HardwareSpeed Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440285 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440293 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440301 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440308 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440318 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440328 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440340 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440351 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440361 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440369 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440377 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440386 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440393 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440400 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440408 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440415 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440423 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440432 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440440 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440448 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440456 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440464 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440473 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440484 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440492 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 03:41:43 crc 
kubenswrapper[4772]: W0124 03:41:43.440502 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440510 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440518 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.440526 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.440961 4772 flags.go:64] FLAG: --address="0.0.0.0" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.440984 4772 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441001 4772 flags.go:64] FLAG: --anonymous-auth="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441013 4772 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441025 4772 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441035 4772 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441047 4772 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441058 4772 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441068 4772 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441077 4772 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441088 4772 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441097 4772 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441106 4772 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441116 4772 flags.go:64] FLAG: --cgroup-root="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441124 4772 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441133 4772 flags.go:64] FLAG: --client-ca-file="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441142 4772 flags.go:64] FLAG: --cloud-config="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441150 4772 flags.go:64] FLAG: --cloud-provider="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441159 4772 flags.go:64] FLAG: --cluster-dns="[]" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441342 4772 flags.go:64] FLAG: --cluster-domain="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441352 4772 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441362 4772 flags.go:64] FLAG: --config-dir="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441370 4772 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441380 4772 flags.go:64] FLAG: --container-log-max-files="5" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441392 4772 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 24 03:41:43 crc 
kubenswrapper[4772]: I0124 03:41:43.441402 4772 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441411 4772 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441421 4772 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441430 4772 flags.go:64] FLAG: --contention-profiling="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441440 4772 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441449 4772 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441459 4772 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441473 4772 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441487 4772 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441496 4772 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441508 4772 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441520 4772 flags.go:64] FLAG: --enable-load-reader="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441532 4772 flags.go:64] FLAG: --enable-server="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441544 4772 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441560 4772 flags.go:64] FLAG: --event-burst="100" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441572 4772 flags.go:64] FLAG: --event-qps="50" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441584 4772 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441596 4772 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441606 4772 flags.go:64] FLAG: --eviction-hard="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441621 4772 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441634 4772 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441645 4772 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441655 4772 flags.go:64] FLAG: --eviction-soft="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441664 4772 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441674 4772 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441683 4772 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441692 4772 flags.go:64] FLAG: --experimental-mounter-path="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441702 4772 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441712 4772 flags.go:64] FLAG: --fail-swap-on="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441722 4772 flags.go:64] FLAG: --feature-gates="" Jan 24 
03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441734 4772 flags.go:64] FLAG: --file-check-frequency="20s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441771 4772 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441783 4772 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441793 4772 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441803 4772 flags.go:64] FLAG: --healthz-port="10248" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441813 4772 flags.go:64] FLAG: --help="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441822 4772 flags.go:64] FLAG: --hostname-override="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441831 4772 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441840 4772 flags.go:64] FLAG: --http-check-frequency="20s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441850 4772 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441859 4772 flags.go:64] FLAG: --image-credential-provider-config="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441870 4772 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441879 4772 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441890 4772 flags.go:64] FLAG: --image-service-endpoint="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441899 4772 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441908 4772 flags.go:64] FLAG: --kube-api-burst="100" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441917 4772 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441926 4772 flags.go:64] FLAG: --kube-api-qps="50" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441935 4772 flags.go:64] FLAG: --kube-reserved="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441944 4772 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441953 4772 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441962 4772 flags.go:64] FLAG: --kubelet-cgroups="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441971 4772 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441980 4772 flags.go:64] FLAG: --lock-file="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441988 4772 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.441997 4772 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442008 4772 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442026 4772 flags.go:64] FLAG: --log-json-split-stream="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442037 4772 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442048 4772 flags.go:64] FLAG: --log-text-split-stream="false" Jan 24 03:41:43 crc 
kubenswrapper[4772]: I0124 03:41:43.442060 4772 flags.go:64] FLAG: --logging-format="text" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442069 4772 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442079 4772 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442088 4772 flags.go:64] FLAG: --manifest-url="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442096 4772 flags.go:64] FLAG: --manifest-url-header="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442108 4772 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442118 4772 flags.go:64] FLAG: --max-open-files="1000000" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442130 4772 flags.go:64] FLAG: --max-pods="110" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442139 4772 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442148 4772 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442157 4772 flags.go:64] FLAG: --memory-manager-policy="None" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442166 4772 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442176 4772 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442185 4772 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442194 4772 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442214 4772 flags.go:64] FLAG: --node-status-max-images="50" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442223 4772 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442232 4772 flags.go:64] FLAG: --oom-score-adj="-999" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442241 4772 flags.go:64] FLAG: --pod-cidr="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442251 4772 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442265 4772 flags.go:64] FLAG: --pod-manifest-path="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442274 4772 flags.go:64] FLAG: --pod-max-pids="-1" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442284 4772 flags.go:64] FLAG: --pods-per-core="0" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442293 4772 flags.go:64] FLAG: --port="10250" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442302 4772 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442311 4772 flags.go:64] FLAG: --provider-id="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442320 4772 flags.go:64] FLAG: --qos-reserved="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442331 4772 flags.go:64] FLAG: --read-only-port="10255" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442340 4772 flags.go:64] FLAG: --register-node="true" Jan 24 03:41:43 crc 
kubenswrapper[4772]: I0124 03:41:43.442349 4772 flags.go:64] FLAG: --register-schedulable="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442358 4772 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442373 4772 flags.go:64] FLAG: --registry-burst="10" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442383 4772 flags.go:64] FLAG: --registry-qps="5" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442392 4772 flags.go:64] FLAG: --reserved-cpus="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442401 4772 flags.go:64] FLAG: --reserved-memory="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442413 4772 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442422 4772 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442431 4772 flags.go:64] FLAG: --rotate-certificates="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442440 4772 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442449 4772 flags.go:64] FLAG: --runonce="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442458 4772 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442467 4772 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442476 4772 flags.go:64] FLAG: --seccomp-default="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442491 4772 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442501 4772 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442510 4772 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442519 4772 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442528 4772 flags.go:64] FLAG: --storage-driver-password="root" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442538 4772 flags.go:64] FLAG: --storage-driver-secure="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442547 4772 flags.go:64] FLAG: --storage-driver-table="stats" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442555 4772 flags.go:64] FLAG: --storage-driver-user="root" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442564 4772 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442573 4772 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442582 4772 flags.go:64] FLAG: --system-cgroups="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442591 4772 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442621 4772 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442630 4772 flags.go:64] FLAG: --tls-cert-file="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442639 4772 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442650 4772 flags.go:64] FLAG: --tls-min-version="" Jan 24 03:41:43 
crc kubenswrapper[4772]: I0124 03:41:43.442661 4772 flags.go:64] FLAG: --tls-private-key-file="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442672 4772 flags.go:64] FLAG: --topology-manager-policy="none" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442683 4772 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442694 4772 flags.go:64] FLAG: --topology-manager-scope="container" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442705 4772 flags.go:64] FLAG: --v="2" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442719 4772 flags.go:64] FLAG: --version="false" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442733 4772 flags.go:64] FLAG: --vmodule="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442794 4772 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.442806 4772 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443060 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443071 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443080 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443089 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443097 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443106 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443114 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443127 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443139 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443147 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443156 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443164 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443172 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443180 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443187 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443196 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443204 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443212 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 
03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443219 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443227 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443237 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443247 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443257 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443280 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443290 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443300 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443308 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443316 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443324 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443332 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443339 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443347 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443354 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443362 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443370 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443377 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443385 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443392 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443400 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443411 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443421 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443429 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443437 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 03:41:43 crc 
kubenswrapper[4772]: W0124 03:41:43.443445 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443453 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443463 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443472 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443481 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443489 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443498 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443506 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443514 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443522 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443529 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443539 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443546 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443554 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443562 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443570 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443582 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
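
Earlier in the start-up output the kubelet echoed its entire effective command line as flags.go:64] FLAG: --name="value" records. A small sketch that turns such a dump into a dictionary, under the same hypothetical kubelet.log assumption; the three values in the trailing comments are taken from the FLAG records above.

import re

# Each echoed flag looks like:  flags.go:64] FLAG: --node-ip="192.168.126.11"
FLAG = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(text):
    """Map each kubelet flag echoed at start-up to its quoted string value."""
    return dict(FLAG.findall(text))

with open("kubelet.log") as f:  # hypothetical file holding this journal section
    flags = parse_flags(f.read())
    print(flags["--config"])    # /etc/kubernetes/kubelet.conf
    print(flags["--node-ip"])   # 192.168.126.11
    print(flags["--max-pods"])  # 110
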
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443592 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443600 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443607 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443615 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443624 4772 feature_gate.go:330] unrecognized feature gate: Example Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443632 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443639 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443647 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443655 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443663 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.443670 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.443686 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.458227 4772 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.458277 4772 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458451 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458474 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458485 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458498 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458510 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458521 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458531 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458541 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458551 4772 feature_gate.go:330] unrecognized feature gate: 
BuildCSIVolumes Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458562 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458571 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458581 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458591 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458603 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458617 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458627 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458636 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458647 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458657 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458667 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458679 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458692 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
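
The same batch of unrecognized-feature-gate warnings recurs several times in this section, once per pass over the configured gate list, so the raw line count overstates the number of distinct unknown gates. A deduplicating sketch, again assuming the section has been saved to the hypothetical kubelet.log and that each journal entry sits on its own line:

import re
from collections import Counter

GATE = re.compile(r"unrecognized feature gate: (\w+)")

def gate_counts(text):
    """Count how often each unknown gate name is warned about."""
    return Counter(GATE.findall(text))

with open("kubelet.log") as f:  # hypothetical file holding this journal section
    counts = gate_counts(f.read())
    print(len(counts), "distinct unrecognized gates")
    for name, n in counts.most_common(5):
        print(n, name)
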
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458708 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458719 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458731 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458780 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458791 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458802 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458812 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458822 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458834 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458846 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458856 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458866 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458877 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458887 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458898 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458912 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458925 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458937 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458948 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458958 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458968 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458979 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458988 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.458997 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459007 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459018 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459028 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459039 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459052 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459068 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459082 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459096 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459109 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459119 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459129 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459137 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459146 4772 feature_gate.go:330] unrecognized feature gate: Example Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459154 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459163 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459171 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459179 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459187 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459195 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459203 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459212 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459221 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459230 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459238 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459247 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.459264 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459562 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459579 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459588 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459597 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459606 4772 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459614 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459623 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459632 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459640 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459648 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459656 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459664 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459672 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459680 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459689 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459697 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459705 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459713 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459721 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459729 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459875 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459887 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459898 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459906 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459914 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459922 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459930 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459938 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459946 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459954 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459962 4772 
feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459970 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459978 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459986 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.459995 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460003 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460012 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460020 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460030 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460039 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460048 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460056 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460091 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460102 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460114 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
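
Each pass over the gate list ends with a consolidated feature gates: {map[Name:bool ...]} summary; it appears twice above and once more below, identical each time. Parsing that Go-style map dump into a Python dict is mechanical. A sketch under the same assumptions (hypothetical kubelet.log, one journal entry per line):

import re

MAP = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_gate_map(text):
    """Parse the kubelet's 'feature gates: {map[...]}' summary into {name: bool}."""
    m = MAP.search(text)
    if m is None:
        return {}
    return {name: value == "true"
            for name, value in (pair.split(":") for pair in m.group(1).split())}

with open("kubelet.log") as f:  # hypothetical file holding this journal section
    gates = parse_gate_map(f.read())
    print(sorted(name for name, on in gates.items() if on))
    # From the summary above: CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders,
    # KMSv1 and ValidatingAdmissionPolicy are the gates set to true.
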
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460125 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460134 4772 feature_gate.go:330] unrecognized feature gate: Example Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460142 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460150 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460158 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460168 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460176 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460184 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460192 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460199 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460207 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460215 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460223 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460231 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460242 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460252 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460261 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460270 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460278 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460288 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460297 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460307 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460317 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460327 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460340 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.460354 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.460369 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.460697 4772 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.465602 4772 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.465793 4772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.467045 4772 server.go:997] "Starting client certificate rotation"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.467075 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.467334 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-27 13:42:17.12614755 +0000 UTC
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.467484 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.482315 4772 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.487261 4772 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.488806 4772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.503206 4772 log.go:25] "Validated CRI v1 runtime API"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.530460 4772 log.go:25] "Validated CRI v1 image API"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.532970 4772 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.535962 4772 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-24-03-37-47-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.536007 4772 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs
blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.566125 4772 manager.go:217] Machine: {Timestamp:2026-01-24 03:41:43.563837307 +0000 UTC m=+0.600928102 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268 BootID:d7b5535c-692f-4ec4-9db9-0dddf96ce11f Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8a:9a:41 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8a:9a:41 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cf:a1:79 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b8:36:65 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5d:6e:1f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:eb:6f:0d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a6:05:f5:8e:ae:a8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:82:21:d9:af:0a:66 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: 
DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.566564 4772 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
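The CSR post above and the reflector, lease, and event posts below all fail identically: `dial tcp 38.102.83.12:6443: connect: connection refused`, i.e. nothing is accepting connections at `api-int.crc.testing:6443` yet. A minimal sketch of the same reachability check the failing dial performs; the host and port are taken from the log, the script itself is illustrative:

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """TCP-connect probe mirroring the kubelet's 'dial tcp ... connection refused'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"  # the state this log captures
    except OSError as exc:           # DNS failure, timeout, no route, ...
        return f"unreachable: {exc}"

if __name__ == "__main__":
    # Endpoint taken from the log; it resolves to 38.102.83.12 in this cluster.
    print(probe("api-int.crc.testing", 6443))
```

On a CRC node this is expected during early boot: kube-apiserver itself runs as a static pod, so the kubelet must start first and keep retrying until the endpoint comes up.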
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.567094 4772 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.567891 4772 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.568278 4772 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.568356 4772 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.568976 4772 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.569012 4772 container_manager_linux.go:303] "Creating device plugin manager"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.569290 4772 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.569347 4772 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.569724 4772 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.569936 4772 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.571372 4772 kubelet.go:418] "Attempting to sync node with API server"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.571412 4772 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
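The `HardEvictionThresholds` in the nodeConfig above mix one absolute quantity (memory.available < 100Mi) with percentages (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%), all with GracePeriod 0. A minimal sketch of how such thresholds evaluate against observed stats; the threshold values and the two capacity figures are copied from this log, while the sample observations are made up for illustration:

```python
# Hard eviction thresholds as logged in the Container Manager nodeConfig.
THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024**2),  # 100Mi in bytes
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """True if 'available' has fallen below the signal's hard eviction threshold."""
    kind, value = THRESHOLDS[signal]
    limit = value if kind == "quantity" else value * capacity
    return available < limit

if __name__ == "__main__":
    # Made-up sample: 80Mi free of the node's 33654128640-byte memory -> breached.
    print(breached("memory.available", 80 * 1024**2, 33654128640))            # True
    # Made-up sample: 20% of /dev/vda4 (/var, 85292941312 bytes) free -> fine.
    print(breached("nodefs.available", 0.20 * 85292941312, 85292941312))      # False
```

Because every GracePeriod is 0 these are hard thresholds: crossing one triggers eviction immediately, with no soft-eviction grace period.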
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.571442 4772 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.571469 4772 kubelet.go:324] "Adding apiserver pod source"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.571491 4772 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.574920 4772 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.575486 4772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.578303 4772 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.579697 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580633 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580781 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580811 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580849 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580870 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580891 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580921 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.580972 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.581004 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.581027 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.581043 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.581199 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.581248 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused
Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.581364 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError"
Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.581390 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.582358 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.583199 4772 server.go:1280] "Started kubelet"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.584153 4772 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.584209 4772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.584658 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.585452 4772 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 03:41:43 crc systemd[1]: Started Kubernetes Kubelet.
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.586964 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.587022 4772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.587094 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:12:26.521879755 +0000 UTC
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.587153 4772 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.587210 4772 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.587189 4772 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.588177 4772 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.587201 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d8dc4776c205e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 03:41:43.58315427 +0000 UTC m=+0.620245035,LastTimestamp:2026-01-24 03:41:43.58315427 +0000 UTC m=+0.620245035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.589138 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms"
Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.589176 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused
Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.589309 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.589839 4772 factory.go:55] Registering systemd factory
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.589882 4772 factory.go:221] Registration of the systemd container factory successfully
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.590346 4772 factory.go:153] Registering CRI-O factory
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.590498 4772 factory.go:221] Registration of the crio container factory successfully
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.590399 4772 server.go:460] "Adding debug handlers to kubelet server"
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.590887 4772 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.591031 4772 factory.go:103] Registering Raw factory
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.591153 4772 manager.go:1196] Started watching for new ooms in manager
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.592329 4772 manager.go:319] Starting recovery of all containers
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.608131 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.608667 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.608814 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.609122 4772 reconstruct.go:130] "Volume is marked as
uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.609226 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.609327 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.609460 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.609603 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.609762 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.609983 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610100 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610189 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610288 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610379 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610456 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610566 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610650 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610789 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610890 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.610981 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611062 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611182 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611303 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611405 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611504 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611615 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611787 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.611910 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612042 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612233 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612326 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612379 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612405 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612479 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612508 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612542 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612567 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612592 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612631 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612782 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612868 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612895 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612927 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612954 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.612980 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613052 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613102 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613127 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613154 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613177 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613232 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613284 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613389 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613533 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613572 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613620 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613662 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613788 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.613820 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614401 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614454 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614553 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614711 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614805 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614875 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614923 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.614972 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.615030 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.615062 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.615128 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.615170 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.615207 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.623210 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.623326 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.623405 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.623503 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624106 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624205 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624234 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624355 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624546 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624578 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624598 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624694 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624765 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624816 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624855 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624877 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624893 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624910 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624926 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624940 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624955 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624977 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.624994 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.625011 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.625030 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.625043 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.625059 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.625072 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.625113 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626140 4772 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626184 4772 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626201 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626251 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626275 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626324 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626339 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626354 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626367 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626382 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626395 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626412 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626425 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626437 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626449 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626461 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626472 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626483 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626495 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626507 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626519 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626530 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626544 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626555 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626568 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626578 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626591 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626604 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626617 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626632 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626675 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626686 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626699 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626709 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626720 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626731 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626756 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626770 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626805 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626836 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626850 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626863 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626874 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626885 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626896 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626908 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626919 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626930 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626943 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626954 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626964 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626975 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.626986 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627000 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627012 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627024 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627038 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627051 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627063 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627075 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627088 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627099 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627111 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627124 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627137 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627149 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627161 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627174 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627186 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627198 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627212 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627224 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627241 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627255 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627270 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627284 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627297 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627310 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627321 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627334 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627347 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627358 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627369 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627381 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627393 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627405 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627418 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627430 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627442 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627455 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627467 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627479 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627491 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627502 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627515 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627529 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627550 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627561 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627576 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627587 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627600 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627614 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627630 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627645 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627659 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627672 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627686 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627696 4772 reconstruct.go:97] "Volume reconstruction finished" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.627704 4772 reconciler.go:26] "Reconciler: start to sync state" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.631813 4772 manager.go:324] Recovery completed Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.643253 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.645978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.646045 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.646062 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.648624 4772 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.648644 4772 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.648695 4772 state_mem.go:36] "Initialized new in-memory state store" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.654129 4772 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.657519 4772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.657564 4772 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.657592 4772 kubelet.go:2335] "Starting kubelet main sync loop" Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.657642 4772 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 03:41:43 crc kubenswrapper[4772]: W0124 03:41:43.658619 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.658671 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.663721 4772 policy_none.go:49] "None policy: Start" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.665680 4772 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.665717 4772 state_mem.go:35] "Initializing new in-memory state store" Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.687821 4772 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.736098 4772 manager.go:334] "Starting Device Plugin manager" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.736178 4772 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.736195 4772 server.go:79] "Starting device plugin registration server" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.736838 4772 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.736859 4772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.737387 4772 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.737486 4772 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.737499 4772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.750470 4772 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.758261 4772 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.758369 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.759777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.759879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.759923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.760221 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.760519 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.760632 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.761983 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762063 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762635 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762794 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.762859 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.764210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.764300 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.764326 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.764600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.764653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.764673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.764976 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.765246 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.765366 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766465 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766794 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.766963 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.767021 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.768669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.768725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.768793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.768725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.768899 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.769357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.769651 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.769784 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.774985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.775066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.775088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.790129 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.830652 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.830796 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.830853 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.830899 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.830939 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.830976 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831011 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831044 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831075 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831110 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831147 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831179 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831213 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831252 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.831284 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.837159 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.839226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.839283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.839298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.839343 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 03:41:43 crc kubenswrapper[4772]: E0124 03:41:43.840368 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.933699 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.933868 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.933912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.933966 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934001 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.933998 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934067 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934173 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934085 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934219 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934283 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934230 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934319 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934354 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934403 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934441 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934444 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934475 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934514 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934547 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934587 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934610 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934636 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934652 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934697 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934513 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934798 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 03:41:43 crc kubenswrapper[4772]: I0124 03:41:43.934848 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.040860 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.042530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.042568 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.042580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.042611 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 03:41:44 crc kubenswrapper[4772]: E0124 03:41:44.043090 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.108773 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.119005 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.137677 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:44 crc kubenswrapper[4772]: W0124 03:41:44.155202 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-21bc85255cfb6497da0690b841b33574783f7eea52598baa716834c1ffbcf645 WatchSource:0}: Error finding container 21bc85255cfb6497da0690b841b33574783f7eea52598baa716834c1ffbcf645: Status 404 returned error can't find the container with id 21bc85255cfb6497da0690b841b33574783f7eea52598baa716834c1ffbcf645 Jan 24 03:41:44 crc kubenswrapper[4772]: W0124 03:41:44.157062 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-afa805b984537063d33dace49b0b3110a9ebb201870b78f6419056956013edc2 WatchSource:0}: Error finding container afa805b984537063d33dace49b0b3110a9ebb201870b78f6419056956013edc2: Status 404 returned error can't find the container with id afa805b984537063d33dace49b0b3110a9ebb201870b78f6419056956013edc2 Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.160418 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:44 crc kubenswrapper[4772]: W0124 03:41:44.164871 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bbfff83eadd5200b11063dc427811f885b3f5262b6a61cf3b7a6e94be45e58a6 WatchSource:0}: Error finding container bbfff83eadd5200b11063dc427811f885b3f5262b6a61cf3b7a6e94be45e58a6: Status 404 returned error can't find the container with id bbfff83eadd5200b11063dc427811f885b3f5262b6a61cf3b7a6e94be45e58a6 Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.173250 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:44 crc kubenswrapper[4772]: W0124 03:41:44.184937 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f2a49a283ee816c72df46f4eadb84cb0467f1c8da4078819df99d7c2b291ebef WatchSource:0}: Error finding container f2a49a283ee816c72df46f4eadb84cb0467f1c8da4078819df99d7c2b291ebef: Status 404 returned error can't find the container with id f2a49a283ee816c72df46f4eadb84cb0467f1c8da4078819df99d7c2b291ebef Jan 24 03:41:44 crc kubenswrapper[4772]: E0124 03:41:44.192095 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms" Jan 24 03:41:44 crc kubenswrapper[4772]: W0124 03:41:44.214838 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1f94f93543cf3de9a06c8dcceb639314b0dbbd24c50855b035950d0a545dd0f6 WatchSource:0}: Error finding container 1f94f93543cf3de9a06c8dcceb639314b0dbbd24c50855b035950d0a545dd0f6: Status 404 returned error can't find the container with id 1f94f93543cf3de9a06c8dcceb639314b0dbbd24c50855b035950d0a545dd0f6 Jan 24 03:41:44 crc kubenswrapper[4772]: W0124 03:41:44.386010 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:44 crc kubenswrapper[4772]: E0124 03:41:44.386139 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:41:44 crc kubenswrapper[4772]: W0124 03:41:44.420523 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:44 crc kubenswrapper[4772]: E0124 03:41:44.420681 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.443241 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.445196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.445246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.445260 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.445295 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 03:41:44 crc kubenswrapper[4772]: E0124 03:41:44.445981 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.586567 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.587549 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:31:46.811649328 +0000 UTC Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.665480 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bbfff83eadd5200b11063dc427811f885b3f5262b6a61cf3b7a6e94be45e58a6"} Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.668445 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"21bc85255cfb6497da0690b841b33574783f7eea52598baa716834c1ffbcf645"} Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.669772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"afa805b984537063d33dace49b0b3110a9ebb201870b78f6419056956013edc2"} Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.671008 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1f94f93543cf3de9a06c8dcceb639314b0dbbd24c50855b035950d0a545dd0f6"} Jan 24 03:41:44 crc kubenswrapper[4772]: I0124 03:41:44.672079 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2a49a283ee816c72df46f4eadb84cb0467f1c8da4078819df99d7c2b291ebef"} Jan 24 03:41:44 crc kubenswrapper[4772]: E0124 03:41:44.993732 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s" Jan 24 03:41:45 crc kubenswrapper[4772]: W0124 03:41:45.114253 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:45 crc kubenswrapper[4772]: E0124 03:41:45.114384 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:41:45 crc kubenswrapper[4772]: W0124 03:41:45.190085 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:45 crc kubenswrapper[4772]: E0124 03:41:45.190214 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.247034 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.249291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.249369 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.249389 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.249444 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 03:41:45 crc kubenswrapper[4772]: E0124 03:41:45.250336 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.585920 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.588326 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:13:26.554999892 +0000 UTC Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.658692 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 03:41:45 crc kubenswrapper[4772]: E0124 03:41:45.659825 4772 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.682273 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.682341 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.682363 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.682382 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.682364 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.683618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.683645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.683656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.685602 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8" exitCode=0 Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.685677 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.685794 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.686943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.687044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.687059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.687647 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="81aa70bae64b4acf33b8b3627a71810a9d3b5f90b4885bfac6ecfdb98439d956" exitCode=0 Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.687752 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"81aa70bae64b4acf33b8b3627a71810a9d3b5f90b4885bfac6ecfdb98439d956"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.687951 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.688891 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.691296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.691317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.691326 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.691668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.691724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.691771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.692259 4772 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ad3d17614375623e6dfab21af0998d3d09dce9db12a7f4ea4eb5d36663fbe155" exitCode=0 Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.692319 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ad3d17614375623e6dfab21af0998d3d09dce9db12a7f4ea4eb5d36663fbe155"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.692387 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.693407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.693437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.693450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.695462 4772 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9" exitCode=0 Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.695507 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9"} Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.695595 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.697313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.697347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.697358 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:45 crc kubenswrapper[4772]: I0124 03:41:45.854160 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.586662 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.588656 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:28:42.905123726 +0000 UTC Jan 24 03:41:46 crc kubenswrapper[4772]: E0124 03:41:46.594850 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="3.2s" Jan 24 03:41:46 crc kubenswrapper[4772]: W0124 03:41:46.662270 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Jan 24 03:41:46 crc kubenswrapper[4772]: E0124 03:41:46.662399 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.709438 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"79b5fcab29514a4fb32ff31202b8e79f1006d19ae13f9e4756bf563bd47ff86d"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.709538 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.710726 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.710777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.710787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.712622 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.712656 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.712668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.712694 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.713802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.713882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.713905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.715237 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.715274 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.715292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.715307 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.717781 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="947039c60f03ca684c69788d549bfd261e579b68a261df9ed5ee8936ed447c08" exitCode=0 Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.717858 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"947039c60f03ca684c69788d549bfd261e579b68a261df9ed5ee8936ed447c08"} Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.717939 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.717970 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.721303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.721344 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.721360 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.722268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.722332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.722366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.851520 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.853459 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.853505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.853518 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.853549 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 03:41:46 crc kubenswrapper[4772]: E0124 03:41:46.854317 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Jan 24 03:41:46 crc kubenswrapper[4772]: I0124 03:41:46.997544 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.589079 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:02:13.568153384 +0000 UTC Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.723943 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad"} Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.724164 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.725297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.725324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.725335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.727277 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="77b3580e1e3d1423ea9ce19c6c38ed2aad483ce5be4a9f7894ea66e0540c891b" exitCode=0 Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.727331 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"77b3580e1e3d1423ea9ce19c6c38ed2aad483ce5be4a9f7894ea66e0540c891b"} Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.727508 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.727535 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.727558 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.727877 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.728059 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.728907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.728931 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.728941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.728943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.728985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.729003 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.728908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.729209 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.729233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.730404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.730420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:47 crc kubenswrapper[4772]: I0124 03:41:47.730428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.589707 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:16:12.400804353 +0000 UTC Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 
03:41:48.687935 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.736727 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b194ee6d47e2412bc3f5b9aa13b9c70d41b09ccbf6f0a358ce7f30552deaea69"} Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.736809 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.736845 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c3acf682f48b32e2da595fba4a4b4b469e96bbd42f2dc6db57c41ad267ef81df"} Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.736873 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"219ef29e7d2f9fff6766f614c167ac0685aa2bdba82aff932c684bc54f5bab9d"} Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.737080 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.737161 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.738434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.738490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.738496 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.738548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.738568 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.738509 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.775883 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.854303 4772 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 03:41:48 crc kubenswrapper[4772]: I0124 03:41:48.854421 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 03:41:49 crc 
kubenswrapper[4772]: I0124 03:41:49.105297 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.105521 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.106885 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.106920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.106935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.590690 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:12:13.855274056 +0000 UTC Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.698978 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.745525 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c66f6de8eaf72271e255644365ae73b92de47c8f5bcd151809ba59d944a6d39"} Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.745589 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.745598 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb4f247e10fd2640f879066983c858ee45f12947f46937c56fe701ee4de34c2a"} Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.745816 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.746868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.746923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.746943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.747510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.747558 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:49 crc kubenswrapper[4772]: I0124 03:41:49.747575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.039236 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.055479 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 
03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.057415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.057475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.057495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.057536 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.591021 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:06:56.424457169 +0000 UTC Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.748180 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.748359 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.749334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.749392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.749413 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.749681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.749703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:50 crc kubenswrapper[4772]: I0124 03:41:50.749715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.074205 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.592147 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:48:24.93495539 +0000 UTC Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.751219 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.751299 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.752879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.752953 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.752978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:51 crc 
kubenswrapper[4772]: I0124 03:41:51.753053 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.753085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:51 crc kubenswrapper[4772]: I0124 03:41:51.753101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:52 crc kubenswrapper[4772]: I0124 03:41:52.592442 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:25:01.583241284 +0000 UTC Jan 24 03:41:53 crc kubenswrapper[4772]: I0124 03:41:53.593705 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:13:41.616800348 +0000 UTC Jan 24 03:41:53 crc kubenswrapper[4772]: I0124 03:41:53.691675 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:53 crc kubenswrapper[4772]: I0124 03:41:53.691933 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:53 crc kubenswrapper[4772]: I0124 03:41:53.695183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:53 crc kubenswrapper[4772]: I0124 03:41:53.695282 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:53 crc kubenswrapper[4772]: I0124 03:41:53.695304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:53 crc kubenswrapper[4772]: E0124 03:41:53.750917 4772 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 03:41:54 crc kubenswrapper[4772]: I0124 03:41:54.593892 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:15:55.197624283 +0000 UTC Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.197826 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.198151 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.200844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.200910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.200930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.206257 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.594254 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:44:40.008112453 +0000 UTC
Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.763553 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.766850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.766915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.766930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:41:55 crc kubenswrapper[4772]: I0124 03:41:55.772248 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 03:41:56 crc kubenswrapper[4772]: I0124 03:41:56.595076 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:50:39.117802487 +0000 UTC
Jan 24 03:41:56 crc kubenswrapper[4772]: I0124 03:41:56.766390 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 03:41:56 crc kubenswrapper[4772]: I0124 03:41:56.768019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:41:56 crc kubenswrapper[4772]: I0124 03:41:56.768083 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:41:56 crc kubenswrapper[4772]: I0124 03:41:56.768104 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:41:57 crc kubenswrapper[4772]: W0124 03:41:57.406264 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 24 03:41:57 crc kubenswrapper[4772]: I0124 03:41:57.406426 4772 trace.go:236] Trace[409000922]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 03:41:47.405) (total time: 10001ms):
Jan 24 03:41:57 crc kubenswrapper[4772]: Trace[409000922]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (03:41:57.406)
Jan 24 03:41:57 crc kubenswrapper[4772]: Trace[409000922]: [10.001196786s] [10.001196786s] END
Jan 24 03:41:57 crc kubenswrapper[4772]: E0124 03:41:57.406471 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 24 03:41:57 crc kubenswrapper[4772]: I0124 03:41:57.587480 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 24 03:41:57 crc kubenswrapper[4772]: I0124 03:41:57.596029 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:19:22.756705968 +0000 UTC
Jan 24 03:41:57 crc kubenswrapper[4772]: W0124 03:41:57.727343 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 24 03:41:57 crc kubenswrapper[4772]: I0124 03:41:57.727508 4772 trace.go:236] Trace[1910287923]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 03:41:47.725) (total time: 10001ms):
Jan 24 03:41:57 crc kubenswrapper[4772]: Trace[1910287923]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (03:41:57.727)
Jan 24 03:41:57 crc kubenswrapper[4772]: Trace[1910287923]: [10.001541304s] [10.001541304s] END
Jan 24 03:41:57 crc kubenswrapper[4772]: E0124 03:41:57.727546 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.110477 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.110790 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.112500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.112566 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.112591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:41:58 crc kubenswrapper[4772]: W0124 03:41:58.169102 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.169279 4772 trace.go:236] Trace[202586067]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 03:41:48.167) (total time: 10001ms):
Jan 24 03:41:58 crc kubenswrapper[4772]: Trace[202586067]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (03:41:58.169)
Jan 24 03:41:58 crc kubenswrapper[4772]: Trace[202586067]: [10.001687169s] [10.001687169s] END
Jan 24 03:41:58 crc kubenswrapper[4772]: E0124 03:41:58.169317 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.179041 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.290720 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.290848 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.303057 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.303202 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.596183 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:52:07.638534413 +0000 UTC
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.698676 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]log ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]etcd ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/priority-and-fairness-filter ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-apiextensions-informers ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-apiextensions-controllers ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/crd-informer-synced ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-system-namespaces-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 24 03:41:58 crc kubenswrapper[4772]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/bootstrap-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/apiservice-registration-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/apiservice-discovery-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]autoregister-completion ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/apiservice-openapi-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 24 03:41:58 crc kubenswrapper[4772]: livez check failed
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.698790 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.771629 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.774404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.774495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.774523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.792924 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.854720 4772 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 24 03:41:58 crc kubenswrapper[4772]: I0124 03:41:58.854834 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 24 03:41:59 crc kubenswrapper[4772]: I0124 03:41:59.596833 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:45:32.684270047 +0000 UTC
Jan 24 03:41:59 crc kubenswrapper[4772]: I0124 03:41:59.774930 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 03:41:59 crc kubenswrapper[4772]: I0124 03:41:59.776545 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:41:59 crc kubenswrapper[4772]: I0124 03:41:59.776609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:41:59 crc kubenswrapper[4772]: I0124 03:41:59.776624 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:00 crc kubenswrapper[4772]: I0124 03:42:00.597561 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:10:16.076816181 +0000 UTC
Jan 24 03:42:01 crc kubenswrapper[4772]: I0124 03:42:01.598531 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:04:41.436261974 +0000 UTC
Jan 24 03:42:01 crc kubenswrapper[4772]: I0124 03:42:01.946495 4772 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 24 03:42:02 crc kubenswrapper[4772]: I0124 03:42:02.599181 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:13:50.922737132 +0000 UTC
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.257106 4772 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.296582 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.302773 4772 trace.go:236] Trace[99312791]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 03:41:50.709) (total time: 12593ms):
Jan 24 03:42:03 crc kubenswrapper[4772]: Trace[99312791]: ---"Objects listed" error: 12593ms (03:42:03.302)
Jan 24 03:42:03 crc kubenswrapper[4772]: Trace[99312791]: [12.593413613s] [12.593413613s] END
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.302841 4772 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.303278 4772 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.304447 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.318719 4772 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.331342 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58794->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.331457 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58794->192.168.126.11:17697: read: connection reset by peer"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.371700 4772 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.582661 4772 apiserver.go:52] "Watching apiserver"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.586826 4772 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.587396 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.588008 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.588166 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.588027 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.588560 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.588575 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.588647 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.588653 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.589396 4772 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.590473 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.591121 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.591623 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.594130 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.594433 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.594445 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.594500 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.594613 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.594863 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.595211 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.596064 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.599500 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:54:59.640406763 +0000 UTC
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605252 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605296 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605335 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605366 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605394 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605429 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605462 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605515 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605542 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605568 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605594 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605623 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605650 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605703 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605710 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605731 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605831 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605859 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605885 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605906 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605931 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605955 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.605975 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606009 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606034 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606054 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606074 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606092 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606118 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606142 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606143 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606170 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606193 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606211 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606233 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606254 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606271 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606291 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606316 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606339 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606356 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606374 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606389 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606406 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606422 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606439 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606458 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606475 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606492 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606511 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606528 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606576 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606595 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606614 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606631 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606652 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606672 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606694 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606716 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606772 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606790 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606807 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606822 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606874 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606891 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606908 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606924 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606943 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606958 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606974 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606991 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607011 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607033 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607053 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607070 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607088 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607105 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607126 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607144 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607161 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607177 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607194 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607210 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607226 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607243 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607260 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607283 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607306 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607325 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607341 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606176 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607387 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606175 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606350 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606435 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606329 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606567 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606633 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606887 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606941 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.606983 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607166 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607234 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607345 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607525 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607549 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.608353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.608404 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.608552 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.608946 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.609029 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.609082 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.609383 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.609412 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.609522 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.609598 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.609769 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.610018 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.610457 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.610674 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.610715 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.610892 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.610943 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.611147 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.611145 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.611614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.611627 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612027 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612036 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612074 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612101 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612520 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612610 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612600 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612691 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612845 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.612917 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.613101 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.613320 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.613354 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.613564 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.613587 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.613613 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.644399 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.645346 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.645539 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.645561 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.646134 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.646169 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.646235 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.646406 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.646423 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.646451 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.646629 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647027 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647090 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647208 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647331 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647391 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647575 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647624 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647826 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.647880 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.648497 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.607357 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.648888 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.648933 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.648990 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649039 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649086 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649139 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649185 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649229 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649272 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649317 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649362 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649408 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649459 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649506 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649549 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649602 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649640 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649682 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649723 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649837 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649883 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649930 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649973 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650024 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650064 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650099 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650142 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650190 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650232 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650273 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650313 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650352 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650387 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650427 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650478 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650515 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650563 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650619 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650671 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650711 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.651249 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.651347 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.651400 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.651447 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.651487 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.651522 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.649603 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650078 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650236 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.650416 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.651553 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652216 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652262 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652277 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652311 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652344 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652376 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652411 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652437 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652466 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652500 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652530 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652563 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652756 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652801 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652803 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652824 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652849 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652874 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.652902 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653025 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653056 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653080 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653105 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653104 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653131 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653115 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653157 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653187 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653215 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653243 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653266 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653289 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653314 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653334 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653356 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653385 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653523 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653555 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653580 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653601 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653626 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653653 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653676 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653697 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653982 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654028 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654078 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654102 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654126 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654152 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654174 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654197 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654216 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654241 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654266 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654291 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654311 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654334 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654358 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654379 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654404 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654427 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654448 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654526 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654560 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654579 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654604 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654658 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654680 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654717 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654756 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654801 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654828 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654849 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654871 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654893 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.655114 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 
03:42:03.655134 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.655150 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.655162 4772 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.655173 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.655186 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663618 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663672 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663696 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663721 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663768 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663795 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663810 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663826 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663855 
4772 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663874 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663889 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663904 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663934 4772 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663949 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663963 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663979 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663999 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664015 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664029 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664048 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664068 4772 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: 
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664083 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664098 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664115 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664135 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664152 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664167 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664185 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664199 4772 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664213 4772 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664229 4772 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664248 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664264 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664277 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664291 4772 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664310 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664323 4772 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664337 4772 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664357 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664370 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664384 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664398 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664416 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664429 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664443 4772 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664460 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664479 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664491 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664505 4772 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664517 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664532 4772 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664546 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664559 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664575 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664589 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664602 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664616 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664632 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664648 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664662 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664675 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664698 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664716 4772 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664734 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664767 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664784 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664801 4772 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664816 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664833 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664849 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664863 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664877 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664894 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664908 4772 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664922 4772 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664906 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664936 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665329 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665362 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665377 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665397 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665416 4772 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665430 4772 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653351 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653699 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653798 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653853 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.653910 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665692 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.654071 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.655943 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.656043 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.656121 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665770 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.656651 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.657089 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.666154 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:42:04.166133244 +0000 UTC m=+21.203223969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.657399 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.657834 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.657959 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.657989 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658172 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658184 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658202 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658209 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658245 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658294 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658308 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658716 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.659270 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.659336 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.659530 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.659590 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.659672 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.658901 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.660137 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.661052 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.661173 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.662346 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663511 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663596 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663617 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663895 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663996 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664160 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664208 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.663161 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.664960 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665514 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.656504 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665942 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.665971 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.666073 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.666529 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.666570 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.667094 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.667275 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.667188 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.667799 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.667892 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.668100 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.668136 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.668543 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.668651 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.668953 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.669205 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.669481 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.669776 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.670066 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.670369 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.670386 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.670814 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.671159 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.671467 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.671546 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.671702 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.671811 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.671904 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.672073 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.672157 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.674984 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.675680 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.677511 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.678643 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.678658 4772 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.679743 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.672246 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.672358 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.672453 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.672796 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.673093 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.673445 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.673822 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.673944 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:04.173914528 +0000 UTC m=+21.211005253 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.682407 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:04.182376739 +0000 UTC m=+21.219467454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.682440 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.682432 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.682543 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.674448 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.675250 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.676232 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.682591 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.676705 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.677042 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.677665 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.678165 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.678580 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.679585 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.680841 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.680932 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.681058 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.681087 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.681672 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.682080 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.672924 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.682252 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.682934 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.682933 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.682952 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.682971 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.682976 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.682990 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.683043 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:04.183029987 +0000 UTC m=+21.220120712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:03 crc kubenswrapper[4772]: E0124 03:42:03.683063 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:04.183055458 +0000 UTC m=+21.220146183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.689245 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.689967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.689989 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.686620 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.690829 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.691018 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.691359 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.691851 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.692891 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.695283 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.698453 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.699905 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.701219 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.702978 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.703891 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.704001 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.705562 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.706978 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.709190 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.709908 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.710183 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.711108 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.713811 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.713829 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.716469 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.717915 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.718248 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.719220 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.721445 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.721977 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.722491 4772 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.722821 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.722882 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.726551 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.727428 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.728049 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.731305 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.732174 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.732750 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.733769 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.734505 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.735720 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.736604 4772 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.737858 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.738154 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.739895 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.740582 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.742048 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.743042 4772 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.744799 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.745465 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.746404 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.747808 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.748849 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.750435 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.751178 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.752018 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.754519 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.754607 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.754671 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.762604 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.764477 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.766979 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767104 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767196 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767208 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767270 4772 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767282 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767294 4772 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767306 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767317 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767329 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767339 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767350 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767360 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767372 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767382 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767394 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767406 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767419 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767491 4772 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767540 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767555 4772 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767570 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767588 4772 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767603 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767621 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767635 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767649 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767662 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767677 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767690 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767705 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767724 4772 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767773 4772 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767789 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767803 4772 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767816 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767829 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767841 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767855 4772 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767869 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767882 4772 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767897 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767913 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767926 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767941 4772 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767954 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767970 4772 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767983 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.767999 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768013 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768026 4772 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768057 4772 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768072 4772 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768086 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768100 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768113 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768129 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768143 4772 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768157 4772 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768171 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768184 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768197 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768210 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768223 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768237 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768250 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768263 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768277 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768291 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768304 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768317 4772 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768329 4772 reconciler_common.go:293] 
"Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768343 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768359 4772 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768374 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768389 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768405 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768419 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768465 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768482 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768495 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768509 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768522 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768536 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath 
\"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768549 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768562 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768575 4772 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768589 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768619 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768632 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768646 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768659 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768672 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768686 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768701 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768713 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768727 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768765 
4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768785 4772 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768802 4772 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768815 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768828 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768840 4772 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768853 4772 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768865 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768880 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768893 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768906 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.768920 4772 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.770257 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 
03:42:03.774271 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.778779 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.788674 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.791400 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad" exitCode=255 Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.791495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad"} Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.792230 4772 scope.go:117] "RemoveContainer" 
containerID="5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.794730 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.808279 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.818759 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.839403 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.855652 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
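Each err= payload above is the strategic-merge patch the status manager tried to send, JSON-escaped once by the kubelet's structured logging and again by this journal capture, which is why every quote appears as \\\". Unquoting once per layer recovers ordinary JSON. A sketch with one level of escaping and a heavily trimmed payload (assumption: shortened to a single condition for illustration; real payloads carry the full conditions and containerStatuses arrays):

// decode_patch.go — sketch: recover the JSON patch from a logged err= string.
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	logged := `"{\"metadata\":{\"uid\":\"3b6479f0-333b-4a96-9adf-2099afdc2447\"},\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\",\"reason\":\"ContainersNotReady\"}]}}"`

	raw, err := strconv.Unquote(logged) // strip one escaping layer
	if err != nil {
		panic(err)
	}
	var patch struct {
		Metadata struct {
			UID string `json:"uid"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type, Status, Reason string
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(raw), &patch); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s: %+v\n", patch.Metadata.UID, patch.Status.Conditions)
}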
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.884036 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.904154 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.913028 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.924068 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.938524 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
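Note the recurring lastState.terminated blocks: exitCode 137 with reason ContainerStatusUnknown is not a real exit record. When the kubelet comes back up and a pod's previous containers no longer exist in the container runtime, it fabricates this terminal state (137 = 128 + SIGKILL) and reports the replacement container as waiting in ContainerCreating. A sketch of that shape using the upstream API types (field values copied verbatim from the log; the helper name is ours):

// synthetic_status.go — sketch: the fabricated terminal state seen above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func syntheticTerminated(name string, restarts int32) corev1.ContainerStatus {
	return corev1.ContainerStatus{
		Name:         name,
		Ready:        false,
		RestartCount: restarts,
		LastTerminationState: corev1.ContainerState{
			Terminated: &corev1.ContainerStateTerminated{
				ExitCode: 137,
				Reason:   "ContainerStatusUnknown",
				Message:  "The container could not be located when the pod was deleted. The container used to be Running",
			},
		},
		State: corev1.ContainerState{
			Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", syntheticTerminated("iptables-alerter", 3))
}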
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.952670 4772 csr.go:261] certificate signing request csr-5fwpr is approved, waiting to be issued Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.970059 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.985921 4772 csr.go:257] certificate signing request csr-5fwpr is issued Jan 24 03:42:03 crc kubenswrapper[4772]: I0124 03:42:03.986529 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24
T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.003508 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.028397 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.179111 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.179242 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.179339 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:42:05.179287781 +0000 UTC m=+22.216378496 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.179402 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.179488 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:05.179469876 +0000 UTC m=+22.216560611 (durationBeforeRetry 1s). 
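The nestedpendingoperations.go:348 lines encode the volume manager's retry policy: a failed mount or unmount blocks that operation until the logged deadline, and the delay (durationBeforeRetry) grows exponentially with consecutive failures up to a cap. A sketch of the schedule (assumption: the 500ms seed, factor 2, and 2m cap below are illustrative stand-ins, not the kubelet's exact constants):

// retry_schedule.go — sketch of the delay growth behind
// "No retries permitted until ... (durationBeforeRetry 1s)".
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}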
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.280336 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.280386 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.280407 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280523 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280560 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280575 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280636 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:05.280618417 +0000 UTC m=+22.317709142 (durationBeforeRetry 1s). 
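The object "ns"/"name" not registered errors are an ordering problem, not data loss: volume setup reads Secrets and ConfigMaps through the kubelet's local watch-based cache rather than hitting the API server directly, so every read fails until a reflector has synced that object after the restart; the Caches populated for *v1.ConfigMap / *v1.Secret entries further down mark exactly that transition. A self-contained sketch of the gate (illustrative only; the kubelet's real cache manager lives in pkg/kubelet/util/manager):

// cache_gate.go — sketch: reads fail until the cache is populated.
package main

import (
	"fmt"
	"sync"
)

type objectCache struct {
	mu    sync.RWMutex
	items map[string]string // "namespace/name" -> payload
}

func (c *objectCache) Get(key string) (string, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.items[key]
	if !ok {
		return "", fmt.Errorf("object %q not registered", key)
	}
	return v, nil
}

func (c *objectCache) Populate(key, payload string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.items == nil {
		c.items = map[string]string{}
	}
	c.items[key] = payload
}

func main() {
	cache := &objectCache{}
	key := "openshift-network-diagnostics/kube-root-ca.crt"

	if _, err := cache.Get(key); err != nil {
		fmt.Println("mount attempt:", err) // MountVolume.SetUp fails, retried in 1s
	}
	cache.Populate(key, "<ca bundle>") // reflector syncs: "Caches populated"
	if _, err := cache.Get(key); err == nil {
		fmt.Println("mount attempt succeeds")
	}
}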
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280627 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280796 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:05.280765201 +0000 UTC m=+22.317856086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280522 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280865 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280886 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:04 crc kubenswrapper[4772]: E0124 03:42:04.280935 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:05.280927535 +0000 UTC m=+22.318018260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.600163 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:26:59.174425097 +0000 UTC Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.772357 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kqp8g"] Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.772892 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.773785 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-sldpz"] Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.774103 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.778264 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.779351 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.779601 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.780087 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.780301 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.780362 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.780307 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.800906 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.801103 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c46s"] Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.802246 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jvgzj"] Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.802436 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.803010 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bnn82"] Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.803259 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.803391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"02afdaadb5fce894ec60e49709fdadf61f66626c2a843662f56527bffc6ccad8"} Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.803503 4772 util.go:30] "No sandbox for pod can be found. 
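The certificate_manager.go and csr.go lines above are the kubelet's serving-certificate rotation: it files a kubernetes.io/kubelet-serving CSR, logs "approved, waiting to be issued" once the approver adds the Approved condition, and "issued" once status.certificate is populated by the signer. The same transition can be observed from any client with CSR read access; a sketch (the CSR name comes from this log; the kubeconfig path and polling loop are assumptions for illustration):

// csr_watch.go — sketch: poll a CSR through the approved/issued transition.
package main

import (
	"context"
	"fmt"
	"time"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		csr, err := client.CertificatesV1().CertificateSigningRequests().
			Get(context.TODO(), "csr-5fwpr", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		approved := false
		for _, c := range csr.Status.Conditions {
			if c.Type == certificatesv1.CertificateApproved {
				approved = true
			}
		}
		switch {
		case len(csr.Status.Certificate) > 0:
			fmt.Println("issued") // matches csr.go:257
			return
		case approved:
			fmt.Println("approved, waiting to be issued") // matches csr.go:261
		}
		time.Sleep(time.Second)
	}
}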
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.805445 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.805531 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86"} Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.805607 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6d2e174a79d2c17b7fe7c7658c65895e2e638d836dd5e5b4595c1cf6e1885dfc"} Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.805614 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.805726 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.807842 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.808005 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.808385 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.808419 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.808560 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.808676 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.808691 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.808767 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.810321 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.810326 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.812049 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.813480 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.816948 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96"} Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.817391 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.820579 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275"} Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.820624 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3"} Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.820638 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8bf7535dff86b560e875b94ad4981269469236f0ff8be4251c6bd463bd168679"} Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.826092 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.842204 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
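From 03:42:04 the webhook failure changes character: the listener is now up, but its serving certificate expired on 2025-08-24 while the node clock reads 2026-01-24, so the POST now fails TLS verification instead of the dial. A sketch that fetches the presented certificate and performs the same NotAfter comparison (endpoint from the log; InsecureSkipVerify is used only to read the chain, not to trust it):

// check_expiry.go — sketch: read the webhook's serving cert validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the chain without trusting it
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("serving cert valid %s .. %s\n", cert.NotBefore, cert.NotAfter)
	if now := time.Now(); now.After(cert.NotAfter) {
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

This matches the wording of the logged error ("current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z") and clears once the cluster's certificate rotation replaces the webhook's serving cert.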
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.860549 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.872536 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/729441e2-b077-4b39-921e-742c199ff8e2-hosts-file\") pod \"node-resolver-sldpz\" (UID: \"729441e2-b077-4b39-921e-742c199ff8e2\") " pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885339 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-cni-binary-copy\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " 
pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885360 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-k8s-cni-cncf-io\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885384 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-kubelet\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885452 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-conf-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885516 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-daemon-config\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-etc-kubernetes\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885582 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-cnibin\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885605 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-os-release\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885623 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-multus-certs\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885730 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-system-cni-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc 
kubenswrapper[4772]: I0124 03:42:04.885816 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-cni-bin\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885915 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-cni-multus\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885953 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-cni-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885979 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-socket-dir-parent\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.885999 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm42l\" (UniqueName: \"kubernetes.io/projected/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-kube-api-access-lm42l\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.886024 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjrc\" (UniqueName: \"kubernetes.io/projected/729441e2-b077-4b39-921e-742c199ff8e2-kube-api-access-qfjrc\") pod \"node-resolver-sldpz\" (UID: \"729441e2-b077-4b39-921e-742c199ff8e2\") " pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.886049 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-netns\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.886086 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-hostroot\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.886900 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.904025 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24
T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.927387 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.952943 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.967671 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.982935 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987032 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-24 03:37:03 +0000 UTC, rotation deadline is 2026-11-26 20:20:18.892182922 +0000 UTC Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987139 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7360h38m13.905047592s for next certificate rotation Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987390 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-log-socket\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987426 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987459 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-cnibin\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987492 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-os-release\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987515 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29djk\" (UniqueName: \"kubernetes.io/projected/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-kube-api-access-29djk\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987536 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-cni-bin\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987559 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-system-cni-dir\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-systemd-units\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987602 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-slash\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987631 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-cnibin\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " 
pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987793 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-system-cni-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987818 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-cni-bin\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-system-cni-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987870 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-cni-multus\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987840 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-cni-multus\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987818 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-os-release\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.987968 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-cni-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988010 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-etc-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988029 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-netd\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988049 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovn-node-metrics-cert\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988074 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfjrc\" (UniqueName: \"kubernetes.io/projected/729441e2-b077-4b39-921e-742c199ff8e2-kube-api-access-qfjrc\") pod \"node-resolver-sldpz\" (UID: \"729441e2-b077-4b39-921e-742c199ff8e2\") " pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988093 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-hostroot\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988117 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-kubelet\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988118 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-cni-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988136 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-etc-kubernetes\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988163 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-hostroot\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988177 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-var-lib-kubelet\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988156 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-etc-kubernetes\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988187 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-kubelet\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988347 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/729441e2-b077-4b39-921e-742c199ff8e2-hosts-file\") pod \"node-resolver-sldpz\" (UID: \"729441e2-b077-4b39-921e-742c199ff8e2\") " pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988451 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dn2g\" (UniqueName: \"kubernetes.io/projected/849e85f7-2aca-4f00-a9be-a5f40979ad26-kube-api-access-6dn2g\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988452 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/729441e2-b077-4b39-921e-742c199ff8e2-hosts-file\") pod \"node-resolver-sldpz\" (UID: \"729441e2-b077-4b39-921e-742c199ff8e2\") " pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988497 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cni-binary-copy\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988523 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988546 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-config\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988571 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qp8x\" (UniqueName: \"kubernetes.io/projected/d2a6ed09-b342-405d-ba5e-52d60ecfec68-kube-api-access-8qp8x\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988593 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-systemd\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-bin\") pod 
\"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988638 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988702 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-multus-certs\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988771 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-multus-certs\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-rootfs\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-netns\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988827 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-var-lib-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988883 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988909 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.988927 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-proxy-tls\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989061 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-socket-dir-parent\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989128 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-socket-dir-parent\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989168 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm42l\" (UniqueName: \"kubernetes.io/projected/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-kube-api-access-lm42l\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989216 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-node-log\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989280 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-netns\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989403 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-env-overrides\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989432 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-conf-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989443 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-netns\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989480 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-daemon-config\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989505 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-conf-dir\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cnibin\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989550 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-os-release\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989632 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-cni-binary-copy\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989681 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-k8s-cni-cncf-io\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-ovn\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989721 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-script-lib\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.989790 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-mcd-auth-proxy-config\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.990105 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" 
(UniqueName: \"kubernetes.io/host-path/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-host-run-k8s-cni-cncf-io\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.990346 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-multus-daemon-config\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.990437 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-cni-binary-copy\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:04 crc kubenswrapper[4772]: I0124 03:42:04.996899 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:04Z is 
after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.008618 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfjrc\" (UniqueName: \"kubernetes.io/projected/729441e2-b077-4b39-921e-742c199ff8e2-kube-api-access-qfjrc\") pod \"node-resolver-sldpz\" (UID: \"729441e2-b077-4b39-921e-742c199ff8e2\") " pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.009463 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm42l\" (UniqueName: \"kubernetes.io/projected/3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d-kube-api-access-lm42l\") pod \"multus-kqp8g\" (UID: \"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\") " pod="openshift-multus/multus-kqp8g" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.013504 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.033285 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.051611 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.070636 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.084987 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kqp8g" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090488 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090572 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-proxy-tls\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-node-log\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-env-overrides\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090660 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-os-release\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090684 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cnibin\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090706 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-ovn\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-mcd-auth-proxy-config\") pod \"machine-config-daemon-bnn82\" (UID: 
\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090804 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-script-lib\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090826 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-log-socket\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29djk\" (UniqueName: \"kubernetes.io/projected/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-kube-api-access-29djk\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090876 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090899 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-slash\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090932 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-system-cni-dir\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090954 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-systemd-units\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.090994 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-etc-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091017 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-netd\") pod \"ovnkube-node-2c46s\" (UID: 
\"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091039 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovn-node-metrics-cert\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091067 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-kubelet\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091090 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cni-binary-copy\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091114 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dn2g\" (UniqueName: \"kubernetes.io/projected/849e85f7-2aca-4f00-a9be-a5f40979ad26-kube-api-access-6dn2g\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091162 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qp8x\" (UniqueName: \"kubernetes.io/projected/d2a6ed09-b342-405d-ba5e-52d60ecfec68-kube-api-access-8qp8x\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091183 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-systemd\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091205 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-bin\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091228 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-config\") pod \"ovnkube-node-2c46s\" (UID: 
\"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-rootfs\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091273 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-netns\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091303 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-var-lib-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091332 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091340 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091421 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-slash\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091434 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091481 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-system-cni-dir\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091544 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-systemd-units\") pod 
\"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091580 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-etc-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.091612 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-netd\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.092139 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.092258 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-node-log\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.092296 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-kubelet\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.092435 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.092540 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-bin\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.092582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093107 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-systemd\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 
24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093123 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-os-release\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-env-overrides\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093181 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cni-binary-copy\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093275 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-netns\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093322 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-rootfs\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093368 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-var-lib-openvswitch\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093407 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2a6ed09-b342-405d-ba5e-52d60ecfec68-cnibin\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093443 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-ovn\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093478 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-log-socket\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093801 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-config\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093807 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-mcd-auth-proxy-config\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.093926 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sldpz" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.094325 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-script-lib\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.095331 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovn-node-metrics-cert\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.099068 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-proxy-tls\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.112211 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.122634 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29djk\" (UniqueName: \"kubernetes.io/projected/60ea55cf-a32f-46c5-9ad8-dec5dbc808b0-kube-api-access-29djk\") pod \"machine-config-daemon-bnn82\" (UID: \"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\") " pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.126629 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qp8x\" (UniqueName: \"kubernetes.io/projected/d2a6ed09-b342-405d-ba5e-52d60ecfec68-kube-api-access-8qp8x\") pod \"multus-additional-cni-plugins-jvgzj\" (UID: \"d2a6ed09-b342-405d-ba5e-52d60ecfec68\") " pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc 
kubenswrapper[4772]: I0124 03:42:05.127046 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.136222 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dn2g\" (UniqueName: \"kubernetes.io/projected/849e85f7-2aca-4f00-a9be-a5f40979ad26-kube-api-access-6dn2g\") pod \"ovnkube-node-2c46s\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.138781 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: W0124 03:42:05.151884 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60ea55cf_a32f_46c5_9ad8_dec5dbc808b0.slice/crio-1836696bc0da0fd21ee56ef047ff95468151d5a54938cf307074c61b06c2425b WatchSource:0}: Error finding container 1836696bc0da0fd21ee56ef047ff95468151d5a54938cf307074c61b06c2425b: Status 404 returned error can't find the container with id 1836696bc0da0fd21ee56ef047ff95468151d5a54938cf307074c61b06c2425b Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.164623 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.179723 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.191919 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.192031 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:42:07.192013902 +0000 UTC m=+24.229104627 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.192182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.192311 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.192363 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:07.192356372 +0000 UTC m=+24.229447097 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.218430 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.292856 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.292909 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293073 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293091 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293104 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293162 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:07.293147953 +0000 UTC m=+24.330238678 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293187 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293217 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293317 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:07.293297837 +0000 UTC m=+24.330388562 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293241 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293364 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.293463 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:07.293432011 +0000 UTC m=+24.330522926 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.292930 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.415185 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.421893 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" Jan 24 03:42:05 crc kubenswrapper[4772]: W0124 03:42:05.425611 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod849e85f7_2aca_4f00_a9be_a5f40979ad26.slice/crio-9b95aa43972ce006c3045df4b91f2149650d66d2921a0c4b42ecc09893969877 WatchSource:0}: Error finding container 9b95aa43972ce006c3045df4b91f2149650d66d2921a0c4b42ecc09893969877: Status 404 returned error can't find the container with id 9b95aa43972ce006c3045df4b91f2149650d66d2921a0c4b42ecc09893969877 Jan 24 03:42:05 crc kubenswrapper[4772]: W0124 03:42:05.440853 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a6ed09_b342_405d_ba5e_52d60ecfec68.slice/crio-2eb6325daeefb09a7952cc1a69442e83cca1cc55d1cfaf0b6186c93a4838b034 WatchSource:0}: Error finding container 2eb6325daeefb09a7952cc1a69442e83cca1cc55d1cfaf0b6186c93a4838b034: Status 404 returned error can't find the container with id 2eb6325daeefb09a7952cc1a69442e83cca1cc55d1cfaf0b6186c93a4838b034 Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.600454 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:55:53.105256565 +0000 UTC Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.657950 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.658000 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.657983 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.658135 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.658250 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:05 crc kubenswrapper[4772]: E0124 03:42:05.658487 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.663439 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.664638 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.665449 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.667139 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.667925 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.669487 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.670534 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.671887 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.824497 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sldpz" event={"ID":"729441e2-b077-4b39-921e-742c199ff8e2","Type":"ContainerStarted","Data":"9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.824554 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sldpz" event={"ID":"729441e2-b077-4b39-921e-742c199ff8e2","Type":"ContainerStarted","Data":"592d2828bcf3606d9b68512b88f55d98aa052d8a2eb1c7d3e7d9307cb322a71f"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.825829 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerStarted","Data":"7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.825863 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerStarted","Data":"2eb6325daeefb09a7952cc1a69442e83cca1cc55d1cfaf0b6186c93a4838b034"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.827028 4772 generic.go:334] "Generic (PLEG): container finished" 
podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9" exitCode=0 Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.827103 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.827157 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"9b95aa43972ce006c3045df4b91f2149650d66d2921a0c4b42ecc09893969877"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.829201 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.829228 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.829240 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"1836696bc0da0fd21ee56ef047ff95468151d5a54938cf307074c61b06c2425b"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.830500 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerStarted","Data":"ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.830528 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerStarted","Data":"3affc96780b1dc1010d19984cce1aba88211b3207d9cab93159936aed9f7bccc"} Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.838104 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.853519 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.864634 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.869449 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.879684 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.883621 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.885628 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.903251 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:05 crc kubenswrapper[4772]: I0124 03:42:05.975324 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:05Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.010053 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.046899 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.097900 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.121109 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.139261 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.155379 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.169166 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.181595 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.192079 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.206146 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.221065 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.237261 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.250709 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.266161 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.284416 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.300337 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.315729 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.329977 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.359786 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z 
is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.601379 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:51:05.414673341 +0000 UTC Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.789461 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gpkkg"] Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.789849 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.791818 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.794093 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.795134 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.796521 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.815172 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.832827 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.835752 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2a6ed09-b342-405d-ba5e-52d60ecfec68" containerID="7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf" exitCode=0 Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.835806 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerDied","Data":"7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf"} Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.839152 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3"} Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.839211 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e"} Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.839247 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d"} Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.839282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76"} Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.840536 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e"} Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.849542 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.897544 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.912080 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.914642 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d4108fb-011c-4894-9f56-25a4d59d67cb-host\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.914674 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9d4108fb-011c-4894-9f56-25a4d59d67cb-serviceca\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " 
pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.914726 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8rvp\" (UniqueName: \"kubernetes.io/projected/9d4108fb-011c-4894-9f56-25a4d59d67cb-kube-api-access-r8rvp\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.924909 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.938076 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.949172 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.963158 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.976316 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:06 crc kubenswrapper[4772]: I0124 03:42:06.992105 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:06Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.003860 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.015869 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d4108fb-011c-4894-9f56-25a4d59d67cb-host\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.021318 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d4108fb-011c-4894-9f56-25a4d59d67cb-host\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.021644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9d4108fb-011c-4894-9f56-25a4d59d67cb-serviceca\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.021727 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8rvp\" (UniqueName: \"kubernetes.io/projected/9d4108fb-011c-4894-9f56-25a4d59d67cb-kube-api-access-r8rvp\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.023020 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9d4108fb-011c-4894-9f56-25a4d59d67cb-serviceca\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.034327 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.056104 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.067441 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8rvp\" (UniqueName: \"kubernetes.io/projected/9d4108fb-011c-4894-9f56-25a4d59d67cb-kube-api-access-r8rvp\") pod \"node-ca-gpkkg\" (UID: \"9d4108fb-011c-4894-9f56-25a4d59d67cb\") " pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.093239 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.116103 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.124951 4772 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-image-registry/node-ca-gpkkg" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.140369 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.156007 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.171538 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.186234 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.229558 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.229803 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.230081 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:42:11.230031209 +0000 UTC m=+28.267121954 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.230413 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.230554 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.230626 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:11.230610394 +0000 UTC m=+28.267701119 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.253443 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.272409 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z 
is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.284296 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.297721 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.307934 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.323987 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.331469 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.331511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.331551 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331689 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331725 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331755 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331688 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331787 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331782 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331815 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:11.331796356 +0000 UTC m=+28.368887081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331800 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331889 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:11.331868798 +0000 UTC m=+28.368959523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.331918 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:11.331904899 +0000 UTC m=+28.368995624 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.337251 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.603985 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:38:59.187648465 +0000 UTC Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.658004 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.658042 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.658277 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.658274 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.658399 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:07 crc kubenswrapper[4772]: E0124 03:42:07.658550 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.854860 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2a6ed09-b342-405d-ba5e-52d60ecfec68" containerID="5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d" exitCode=0 Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.854923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerDied","Data":"5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d"} Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.858177 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gpkkg" event={"ID":"9d4108fb-011c-4894-9f56-25a4d59d67cb","Type":"ContainerStarted","Data":"49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5"} Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.858282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gpkkg" event={"ID":"9d4108fb-011c-4894-9f56-25a4d59d67cb","Type":"ContainerStarted","Data":"109704ccbf00d38a68c1763686fab37c3d951d449356dc68443cd03f13ce1939"} Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.867575 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3"} Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.867641 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef"} Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.871656 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.885550 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.902323 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.917564 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.936164 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.952026 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.969938 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.983303 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:07 crc kubenswrapper[4772]: I0124 03:42:07.999593 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:07Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.015097 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.039293 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z 
is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.054376 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.068285 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.080835 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.096797 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.110727 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.123137 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.141410 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z 
is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.158507 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.173338 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.184852 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.200229 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.211149 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.228732 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.241642 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.253419 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.268010 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.280292 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.604574 4772 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:45:08.035701128 +0000 UTC Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.874408 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2a6ed09-b342-405d-ba5e-52d60ecfec68" containerID="726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604" exitCode=0 Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.874483 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerDied","Data":"726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604"} Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.896983 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.916230 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.937550 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:08 crc kubenswrapper[4772]: I0124 03:42:08.993456 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:08Z 
is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.015188 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.043355 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.056416 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.072651 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.082222 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.093312 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.109165 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.125201 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.138907 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.158363 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.605595 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:26:50.090162407 +0000 UTC Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.658359 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.658483 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.658597 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.658503 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.658804 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.658804 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.704563 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.707178 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.707231 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.707250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.707401 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.718911 4772 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.719291 4772 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.721198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.721251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.721270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.721295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.721336 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:09Z","lastTransitionTime":"2026-01-24T03:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.744455 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.749644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.749697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.749713 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.749739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.749788 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:09Z","lastTransitionTime":"2026-01-24T03:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.769603 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.774304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.774381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
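
The node-status retries above all fail on the same TLS step: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate whose validity window ended 2025-08-24T17:21:41Z, while the node clock reads 2026-01-24T03:42:09Z. A minimal Go sketch for inspecting that serving certificate from the node (a hypothetical diagnostic, not part of the log; it assumes the endpoint is reachable and uses InsecureSkipVerify so the expired chain can be read instead of being rejected during the handshake):

	package main

	import (
		"crypto/tls"
		"fmt"
		"time"
	)

	func main() {
		// Webhook endpoint taken from the failing Post in the log above.
		// InsecureSkipVerify lets us read the expired chain instead of
		// failing the handshake the way the webhook caller does.
		conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
		if err != nil {
			fmt.Println("dial:", err)
			return
		}
		defer conn.Close()

		now := time.Now()
		for _, cert := range conn.ConnectionState().PeerCertificates {
			fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
				cert.Subject.String(), cert.NotBefore.Format(time.RFC3339),
				cert.NotAfter.Format(time.RFC3339), now.After(cert.NotAfter))
		}
	}

Run against the address from the log, it should report notAfter=2025-08-24T17:21:41Z and expired=true, matching the x509 message that every patch attempt keeps hitting.
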
event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.774400 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.774432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.774454 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:09Z","lastTransitionTime":"2026-01-24T03:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.791169 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.796275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.796358 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
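
The x509 failure itself reduces to a clock comparison against the certificate's NotAfter bound. A small sketch reproducing that check with the two timestamps quoted verbatim in the error text (illustrative only; crypto/x509 performs the equivalent comparison internally when it reports "certificate has expired or is not yet valid"):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Both timestamps are copied verbatim from the webhook error above.
		now, _ := time.Parse(time.RFC3339, "2026-01-24T03:42:09Z")
		notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")

		// The verifier rejects the chain when the verification time falls
		// outside [NotBefore, NotAfter]; here "now" is well past NotAfter.
		if now.After(notAfter) {
			fmt.Printf("certificate expired %s before the current time\n",
				now.Sub(notAfter).Round(time.Hour))
		}
	}

The gap it prints, about five months, matches the spread between the two dates in the error.
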
event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.796378 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.796406 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.796424 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:09Z","lastTransitionTime":"2026-01-24T03:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.816618 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.821655 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.821747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.821818 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.821857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.821886 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:09Z","lastTransitionTime":"2026-01-24T03:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.838445 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: E0124 03:42:09.838814 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.841174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.841242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.841260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.841285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.841303 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:09Z","lastTransitionTime":"2026-01-24T03:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.883157 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2a6ed09-b342-405d-ba5e-52d60ecfec68" containerID="7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f" exitCode=0 Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.883260 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerDied","Data":"7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f"} Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.889893 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8"} Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.907035 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.926611 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.944264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.944305 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.944316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.944338 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.944350 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:09Z","lastTransitionTime":"2026-01-24T03:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.981049 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:09 crc kubenswrapper[4772]: I0124 03:42:09.998126 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:09Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.026213 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.040438 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.047543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.047588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.047598 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.047616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.047628 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.056652 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.069181 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.084452 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.099411 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.113454 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.131926 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z 
is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.147818 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.150704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 
03:42:10.150750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.150764 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.150799 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.150813 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.164235 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.254378 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.254439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.254452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.254477 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.254491 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.357854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.357905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.357915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.357934 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.357946 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
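Each status_manager.go:875 entry above carries the rejected patch as an escaped JSON string embedded in the log line. A minimal decoding sketch, using a shortened stand-in fragment (one escaping layer; a real journal line adds another) rather than a full line from this log:

```python
import json

# Shortened stand-in for the escaped payload inside a status_manager.go:875
# line; the UID is the networking-console-plugin pod's UID from the entries
# above. A real line carries the full pod status patch in the same shape.
log_fragment = '"{\\"metadata\\":{\\"uid\\":\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\"},\\"status\\":{\\"podIP\\":null}}"'

# Outer loads unquotes the embedded string; inner loads parses the patch.
patch = json.loads(json.loads(log_fragment))
print(patch["metadata"]["uid"])  # 5fe485a1-e14f-4c09-b5b9-f252bc42b7e8
print(patch["status"])           # {'podIP': None}
```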
Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.461066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.461144 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.461163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.461196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.461216 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.564148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.564203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.564216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.564238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.564253 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.606162 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:27:01.17422373 +0000 UTC Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.667434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.667482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.667494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.667510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.667521 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.770968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.771011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.771021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.771042 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.771052 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.874050 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.874120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.874135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.874160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.874186 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.901950 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2a6ed09-b342-405d-ba5e-52d60ecfec68" containerID="2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70" exitCode=0 Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.902022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerDied","Data":"2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70"} Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.921819 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.942865 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.961558 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.977136 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.978341 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.978366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.978374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.978388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:10 crc kubenswrapper[4772]: I0124 03:42:10.978397 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:10Z","lastTransitionTime":"2026-01-24T03:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.002563 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:10Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.020589 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.043210 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z 
is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.058223 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.073589 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.082711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.082787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.082808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.082826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.082838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.092431 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.107475 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.118112 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.131141 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.153766 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.185188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.185238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.185251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.185272 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.185286 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.274828 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.274921 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.275062 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:42:19.275031571 +0000 UTC m=+36.312122296 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.275270 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.275479 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:19.275424792 +0000 UTC m=+36.312515557 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.288532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.288601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.288619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.288648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.288667 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.376854 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.376955 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.377158 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377182 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377290 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377302 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377396 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:19.377367805 +0000 UTC m=+36.414458540 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377185 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377434 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377456 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377399 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377556 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:19.377525189 +0000 UTC m=+36.414615954 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.377688 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:19.377653492 +0000 UTC m=+36.414744227 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.392077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.392139 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.392155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.392180 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.392197 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.495681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.495730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.495786 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.495817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.495831 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.599223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.599268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.599287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.599312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.599328 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.606455 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:07:04.837388909 +0000 UTC Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.658360 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.658451 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.658500 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.658451 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.658645 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:11 crc kubenswrapper[4772]: E0124 03:42:11.658813 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.701947 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.702004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.702018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.702044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.702057 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.805709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.805785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.805799 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.805819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.805832 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.907881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.908374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.908391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.908409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.908422 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:11Z","lastTransitionTime":"2026-01-24T03:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.912889 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.913615 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.913656 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.920213 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2a6ed09-b342-405d-ba5e-52d60ecfec68" containerID="948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4" exitCode=0 Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.920274 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerDied","Data":"948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4"} Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.930652 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.949563 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.949840 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.959895 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.965710 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.979280 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:11 crc kubenswrapper[4772]: I0124 03:42:11.991178 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:11Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.006760 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.012177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.012227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.012240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.012261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.012274 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.020613 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.033834 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.048779 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.061425 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.077372 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.095619 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.115391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.115456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.115478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.115506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.115528 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.115679 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.144517 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.170618 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.182303 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.196917 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.214786 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.218619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.218663 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.218676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.218698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.218714 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.229989 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.245798 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.314670 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.321535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.321584 4772 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.321595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.321663 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.321676 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.333866 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.353439 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.369520 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.391699 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6
c888d4ad0e22515f061da9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.405079 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.418461 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.424355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.424404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.424415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.424435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.424448 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.432407 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.527578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.527626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.527640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.527660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.527681 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.606731 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:53:47.443495337 +0000 UTC Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.630481 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.630547 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.630560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.630582 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.630597 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.733423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.733467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.733476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.733493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.733538 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.836687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.836757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.836770 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.836792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.836806 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.932254 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" event={"ID":"d2a6ed09-b342-405d-ba5e-52d60ecfec68","Type":"ContainerStarted","Data":"58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.932358 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.939573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.939643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.939662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.939691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.939711 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:12Z","lastTransitionTime":"2026-01-24T03:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.957466 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.975627 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:12 crc kubenswrapper[4772]: I0124 03:42:12.993365 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:12Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.032515 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6
c888d4ad0e22515f061da9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.042715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.042839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.042861 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.042889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.042911 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.053908 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.077315 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.094514 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.118401 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.136956 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.145804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.145911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.145932 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.145986 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.146006 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.161967 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.190424 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.213674 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.247372 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.249496 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.249528 4772 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.249542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.249560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.249573 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.269638 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.353056 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.353119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.353131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.353152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.353166 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.456376 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.456430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.456442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.456461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.456477 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.468573 4772 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.559577 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.559638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.559653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.559678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.559692 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.607096 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:15:07.877806797 +0000 UTC Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.658894 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.659031 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:13 crc kubenswrapper[4772]: E0124 03:42:13.659063 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.659154 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:13 crc kubenswrapper[4772]: E0124 03:42:13.659328 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:13 crc kubenswrapper[4772]: E0124 03:42:13.659493 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.662369 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.662409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.662418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.662434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.662444 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.679265 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.699615 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.724966 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.758126 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.763913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.763936 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.763947 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.763963 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.763974 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.774351 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.793675 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.807353 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.830106 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.845775 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.862369 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.866375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.866420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.866431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.866448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.866457 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.877370 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.897051 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6
c888d4ad0e22515f061da9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.910289 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.922449 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:13Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.935623 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.969243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.969522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.969697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.970018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:13 crc kubenswrapper[4772]: I0124 03:42:13.970234 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:13Z","lastTransitionTime":"2026-01-24T03:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.072989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.073027 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.073036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.073053 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.073062 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.175822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.175862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.175877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.175897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.175933 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.278554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.278618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.278633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.278653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.278672 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.381174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.381212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.381225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.381256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.381269 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.484383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.484426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.484436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.484453 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.484467 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.588135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.588196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.588208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.588229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.588243 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.607650 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:29:54.280930502 +0000 UTC Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.691890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.691939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.691950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.691968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.691979 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.794471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.794538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.794559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.794588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.794608 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.898211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.898334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.898398 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.898436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.898499 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:14Z","lastTransitionTime":"2026-01-24T03:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.941473 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/0.log" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.945194 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd" exitCode=1 Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.945256 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd"} Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.946799 4772 scope.go:117] "RemoveContainer" containerID="53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.966269 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:14Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:14 crc kubenswrapper[4772]: I0124 03:42:14.989534 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:14Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.002696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.002841 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.002868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.002901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.002924 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.010875 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.029176 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.055277 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.073036 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.097056 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.107007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.107073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.107093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.107146 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.107167 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.112834 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.137364 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.155320 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.170985 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.186961 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.201857 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.208973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.209008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.209019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.209036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.209045 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.224916 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:14Z\\\",\\\"message\\\":\\\"ndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0124 03:42:14.264948 6038 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0124 03:42:14.264988 6038 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0124 03:42:14.264994 6038 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 1.081969ms\\\\nI0124 03:42:14.264986 6038 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.311412 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.311454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.311464 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.311480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.311491 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.316644 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.413538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.413605 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.413628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.413654 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.413688 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.516279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.516323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.516337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.516356 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.516370 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.608291 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:44:27.273988547 +0000 UTC Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.619312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.619352 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.619367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.619386 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.619398 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.658883 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.658883 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.659418 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:15 crc kubenswrapper[4772]: E0124 03:42:15.659573 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:15 crc kubenswrapper[4772]: E0124 03:42:15.659503 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:15 crc kubenswrapper[4772]: E0124 03:42:15.659969 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.722069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.722113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.722124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.722144 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.722158 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.824763 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.824801 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.824811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.824826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.824838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.927801 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.927836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.927845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.927859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.927869 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:15Z","lastTransitionTime":"2026-01-24T03:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.949147 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/0.log"
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.951983 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191"}
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.952347 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s"
Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.965288 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.977300 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:15 crc kubenswrapper[4772]: I0124 03:42:15.989127 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.006897 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92e
bb3a65dc24b652e59bd77191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:14Z\\\",\\\"message\\\":\\\"ndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0124 03:42:14.264948 6038 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0124 03:42:14.264988 6038 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0124 03:42:14.264994 6038 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 1.081969ms\\\\nI0124 03:42:14.264986 6038 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.017993 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.029196 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.030211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.030249 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.030263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.030281 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.030294 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.039193 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z"
Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.053842 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.065075 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.082203 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.093872 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.108839 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.121950 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.132312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.132469 4772 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.132645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.132762 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.132837 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.133790 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.235561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.235604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.235612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.235626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.235637 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.338530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.338569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.338578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.338595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.338608 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.440775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.440810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.440818 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.440833 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.440843 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.543432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.543471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.543482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.543499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.543509 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.609071 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:31:36.596506048 +0000 UTC Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.645263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.645299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.645308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.645322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.645330 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.748566 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.748613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.748623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.748639 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.748649 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.850765 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.850852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.850871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.850897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.850916 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.953725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.953786 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.953795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.953811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.953823 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:16Z","lastTransitionTime":"2026-01-24T03:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.956949 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/1.log" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.957929 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/0.log" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.961510 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191" exitCode=1 Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.961582 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191"} Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.961670 4772 scope.go:117] "RemoveContainer" containerID="53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.962827 4772 scope.go:117] "RemoveContainer" containerID="8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191" Jan 24 03:42:16 crc kubenswrapper[4772]: E0124 03:42:16.963202 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:42:16 crc kubenswrapper[4772]: I0124 03:42:16.980531 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:16Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.010206 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.028374 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.049959 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.056039 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.056086 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.056096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.056114 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.056124 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.070599 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.085204 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.104163 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.126809 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.142126 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.159240 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.159292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.159310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.159334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.159352 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.164126 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.185062 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.218331 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92e
bb3a65dc24b652e59bd77191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:14Z\\\",\\\"message\\\":\\\"ndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0124 03:42:14.264948 6038 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0124 03:42:14.264988 6038 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0124 03:42:14.264994 6038 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 1.081969ms\\\\nI0124 03:42:14.264986 6038 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.238295 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.255019 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.262667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.262723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.262739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.262783 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.262798 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.365593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.365658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.365679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.365704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.365723 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.470292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.470342 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.470354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.470374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.470393 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.573773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.574203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.574387 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.574546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.574684 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.609928 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:34:59.96157932 +0000 UTC Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.658359 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.658450 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:17 crc kubenswrapper[4772]: E0124 03:42:17.658578 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.658686 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:17 crc kubenswrapper[4772]: E0124 03:42:17.658879 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:17 crc kubenswrapper[4772]: E0124 03:42:17.659078 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.677695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.678038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.678451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.678669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.678903 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.782091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.782798 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.782912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.783038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.783486 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.796078 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f"] Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.796721 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.799106 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.799178 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.815153 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.835669 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.853506 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.871081 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bfcaf254-b568-4170-8068-e55bd06685a4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.871142 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hwvmm\" (UniqueName: \"kubernetes.io/projected/bfcaf254-b568-4170-8068-e55bd06685a4-kube-api-access-hwvmm\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.871281 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bfcaf254-b568-4170-8068-e55bd06685a4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.871456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bfcaf254-b568-4170-8068-e55bd06685a4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.876560 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7
708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.889229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.889303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.889332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.889365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.889388 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.890373 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.913474 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.928166 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.945699 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.963045 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.965733 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/1.log" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 
03:42:17.969379 4772 scope.go:117] "RemoveContainer" containerID="8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191" Jan 24 03:42:17 crc kubenswrapper[4772]: E0124 03:42:17.969526 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.972815 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bfcaf254-b568-4170-8068-e55bd06685a4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.973001 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwvmm\" (UniqueName: \"kubernetes.io/projected/bfcaf254-b568-4170-8068-e55bd06685a4-kube-api-access-hwvmm\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.973119 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bfcaf254-b568-4170-8068-e55bd06685a4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.973252 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bfcaf254-b568-4170-8068-e55bd06685a4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.973939 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bfcaf254-b568-4170-8068-e55bd06685a4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.973938 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bfcaf254-b568-4170-8068-e55bd06685a4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.978698 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bfcaf254-b568-4170-8068-e55bd06685a4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.982062 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.991704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.991786 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.991801 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.991825 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.991839 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:17Z","lastTransitionTime":"2026-01-24T03:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.994383 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwvmm\" (UniqueName: \"kubernetes.io/projected/bfcaf254-b568-4170-8068-e55bd06685a4-kube-api-access-hwvmm\") pod \"ovnkube-control-plane-749d76644c-ljl6f\" (UID: \"bfcaf254-b568-4170-8068-e55bd06685a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:17 crc kubenswrapper[4772]: I0124 03:42:17.995508 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:17Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.014097 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.033465 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.050592 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.071507 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92e
bb3a65dc24b652e59bd77191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53990cf184dd21929ab67c48d26630ee39079ec6c888d4ad0e22515f061da9dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:14Z\\\",\\\"message\\\":\\\"ndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0124 03:42:14.264948 6038 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0124 03:42:14.264988 6038 services_controller.go:444] Built service openshift-oauth-apiserver/api LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0124 03:42:14.264994 6038 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 1.081969ms\\\\nI0124 03:42:14.264986 6038 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.085881 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.093927 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.094018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.094034 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.094052 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.094065 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.103202 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.116862 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.120657 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.142776 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.162060 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.174240 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.183177 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.198947 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.198991 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.199001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.199016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.199026 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.207947 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\
":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.218609 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 
03:42:18.232939 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.248453 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.264167 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.278122 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.297271 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.301339 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.301393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.301404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.301424 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.301436 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.308706 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.405097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.405136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.405151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.405169 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.405180 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.507771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.507800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.507810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.507824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.507834 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.610078 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:39:11.489578978 +0000 UTC Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.610151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.610245 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.610264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.610288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.610302 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.713163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.713219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.713230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.713251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.713270 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.780510 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.799314 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.816226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.816262 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.816271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.816286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.816297 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.818804 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.831521 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.845661 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.866576 4772 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776
e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.881534 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.892370 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mpdb8"] Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.893192 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:18 crc kubenswrapper[4772]: E0124 03:42:18.893295 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.912689 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update 
Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.921304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.921385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.921407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.921433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.921449 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:18Z","lastTransitionTime":"2026-01-24T03:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.929592 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.942050 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.954507 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.968452 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.974923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" event={"ID":"bfcaf254-b568-4170-8068-e55bd06685a4","Type":"ContainerStarted","Data":"920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.974981 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" event={"ID":"bfcaf254-b568-4170-8068-e55bd06685a4","Type":"ContainerStarted","Data":"84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.974996 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" event={"ID":"bfcaf254-b568-4170-8068-e55bd06685a4","Type":"ContainerStarted","Data":"d97c6b06cb0417679d56444660f8b6a807cf10e2a0a1afd3e75774308ad5cad3"} Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.982712 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.994520 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:18Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.997213 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:18 crc kubenswrapper[4772]: I0124 03:42:18.997295 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2xrm\" (UniqueName: \"kubernetes.io/projected/e8311b11-97fe-4657-add7-66fd66adc69f-kube-api-access-g2xrm\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.006508 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.017223 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.024151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.024176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.024185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.024201 4772 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.024212 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.030784 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.046398 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.073610 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92e
bb3a65dc24b652e59bd77191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.090856 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.099035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2xrm\" (UniqueName: \"kubernetes.io/projected/e8311b11-97fe-4657-add7-66fd66adc69f-kube-api-access-g2xrm\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.099136 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.099272 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.099332 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. No retries permitted until 2026-01-24 03:42:19.599313728 +0000 UTC m=+36.636404453 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.102996 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 
2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.117191 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.125772 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2xrm\" (UniqueName: \"kubernetes.io/projected/e8311b11-97fe-4657-add7-66fd66adc69f-kube-api-access-g2xrm\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.127239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.127264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.127274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.127289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.127299 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.129872 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.145028 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.158600 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.172391 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.190789 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.207559 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.225493 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.229620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.229664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.229681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.229705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.229724 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.245727 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.258518 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.270272 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:19Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.300882 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.301156 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.301379 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.301488 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:35.301465955 +0000 UTC m=+52.338556720 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.301573 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:42:35.301559678 +0000 UTC m=+52.338650433 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.332893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.332941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.332958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.333002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.333015 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.402494 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.402595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.402635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.402824 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.402987 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:35.402950705 +0000 UTC m=+52.440041470 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403160 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403262 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403367 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403487 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:35.40347385 +0000 UTC m=+52.440564585 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403254 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403626 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403652 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.403787 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:42:35.403727837 +0000 UTC m=+52.440818592 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.435557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.435620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.435641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.435668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.435687 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.539208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.539286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.539313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.539349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.539374 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.605693 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.605944 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.606080 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. No retries permitted until 2026-01-24 03:42:20.606043908 +0000 UTC m=+37.643134663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.610391 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:42:07.787620551 +0000 UTC Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.643515 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.643647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.643731 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.643780 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.643801 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.658575 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.658671 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.658841 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.658924 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.658848 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:19 crc kubenswrapper[4772]: E0124 03:42:19.659002 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.746884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.746929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.746939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.746955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.746968 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.849395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.849458 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.849477 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.849502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.849520 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.951981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.952019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.952027 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.952042 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:19 crc kubenswrapper[4772]: I0124 03:42:19.952052 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:19Z","lastTransitionTime":"2026-01-24T03:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.054685 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.054731 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.054772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.054796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.054812 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.132461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.132490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.132500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.132513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.132523 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.146648 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:20Z is after 
2025-08-24T17:21:41Z" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.150429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.150590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.151019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.151222 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.151311 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.166051 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:20Z is after 
2025-08-24T17:21:41Z" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.174138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.174191 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.174213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.174243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.174271 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.198478 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:20Z is after 
2025-08-24T17:21:41Z" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.204538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.204648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.204670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.204697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.204717 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.227673 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:20Z is after 
2025-08-24T17:21:41Z" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.231960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.232023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.232037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.232059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.232072 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.246583 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:20Z is after 
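All five patch failures above trace to the same check: Go's crypto/x509 rejects any certificate whose NotAfter precedes the verification time, and the kubelet's HTTPS POST to the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 trips exactly that check. The following is a minimal standalone Go sketch, not kubelet code; the subject name is illustrative and only the expiry date and clock time are taken from the log. It reproduces the identical error string:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Self-signed certificate whose expiry matches the one in the log.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity.openshift.io"}, // illustrative
		NotBefore:             time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:              time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry seen in the log
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)
	// Verifying at the journal's wall-clock time fails on the validity window
	// before any signature check runs.
	_, err = cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2026, 1, 24, 3, 42, 20, 0, time.UTC),
	})
	fmt.Println(err)
	// x509: certificate has expired or is not yet valid:
	// current time 2026-01-24T03:42:20Z is after 2025-08-24T17:21:41Z
}

On a live host the same validity window can be read directly from the serving certificate with openssl x509 -noout -dates.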
2025-08-24T17:21:41Z" Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.246869 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.248947 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.249014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.249031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.249048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.249061 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.351824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.351902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.351928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.351955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.351972 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.455610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.455667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.455681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.455704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.455726 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.558886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.558935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.558949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.558965 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.558975 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.611119 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:31:20.862400277 +0000 UTC Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.618237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.618505 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.618603 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. 
No retries permitted until 2026-01-24 03:42:22.618577194 +0000 UTC m=+39.655667959 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.658118 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:20 crc kubenswrapper[4772]: E0124 03:42:20.658354 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.663053 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.663123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.663152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.663184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.663299 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.766598 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.766663 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.766685 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.766717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.766787 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.870203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.870279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.870298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.870326 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.870345 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.973669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.973775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.973795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.973824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:20 crc kubenswrapper[4772]: I0124 03:42:20.973841 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:20Z","lastTransitionTime":"2026-01-24T03:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.076253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.076314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.076330 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.076363 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.076387 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.179833 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.179873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.179881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.179897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.179909 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.283355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.283435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.283464 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.283494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.283517 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.386870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.386954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.386977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.387007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.387028 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.490491 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.490574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.490601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.490636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.490663 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.593835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.593883 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.593895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.593916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.593930 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.612173 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:28:11.338144912 +0000 UTC Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.658160 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.658307 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:21 crc kubenswrapper[4772]: E0124 03:42:21.658536 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.658611 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:21 crc kubenswrapper[4772]: E0124 03:42:21.658708 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:21 crc kubenswrapper[4772]: E0124 03:42:21.658922 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.697166 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.697211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.697221 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.697236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.697245 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.800185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.800244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.800266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.800296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.800320 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.903171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.903244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.903264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.903291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:21 crc kubenswrapper[4772]: I0124 03:42:21.903310 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:21Z","lastTransitionTime":"2026-01-24T03:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.006651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.006699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.006709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.006725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.006759 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.110377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.110442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.110460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.110487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.110507 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.213406 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.213479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.213500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.213527 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.213606 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.360955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.361259 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.361482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.361592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.361690 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.466933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.467010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.467030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.467059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.467083 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.569908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.569952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.569964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.570002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.570016 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.613203 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:19:59.609363143 +0000 UTC Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.658955 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:22 crc kubenswrapper[4772]: E0124 03:42:22.659244 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.663502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:22 crc kubenswrapper[4772]: E0124 03:42:22.663697 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:22 crc kubenswrapper[4772]: E0124 03:42:22.663828 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. No retries permitted until 2026-01-24 03:42:26.663801137 +0000 UTC m=+43.700891902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.672271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.672314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.672332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.672355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.672374 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.775235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.775290 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.775306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.775332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.775349 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.877546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.877586 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.877595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.877609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.877619 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.980428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.980479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.980491 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.980507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:22 crc kubenswrapper[4772]: I0124 03:42:22.980519 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:22Z","lastTransitionTime":"2026-01-24T03:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.084277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.084452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.084479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.084511 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.084534 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.187724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.187801 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.187820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.187841 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.187857 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.291151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.291250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.291268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.291294 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.291313 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.394859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.394961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.394998 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.395033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.395055 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.497995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.498035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.498044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.498060 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.498070 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.600943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.601017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.601037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.601062 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.601081 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.614166 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:44:10.860067342 +0000 UTC Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.657969 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.658028 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:23 crc kubenswrapper[4772]: E0124 03:42:23.658134 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.658178 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:23 crc kubenswrapper[4772]: E0124 03:42:23.658259 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:23 crc kubenswrapper[4772]: E0124 03:42:23.658339 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.679424 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.697549 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.703618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.703663 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.703675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.703691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.703702 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.717105 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.734510 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.750702 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.776826 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.797859 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.808798 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.808875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.808900 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.808933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.808956 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.830860 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.849825 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.870263 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.884227 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.895604 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.911492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.911546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.911566 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.911591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.911610 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:23Z","lastTransitionTime":"2026-01-24T03:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.914431 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c0
4436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.925876 4772 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.945127 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:23 crc kubenswrapper[4772]: I0124 03:42:23.961095 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:23Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.013504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.013566 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.013575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.013593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.013605 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.116374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.116416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.116428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.116445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.116455 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.219137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.219192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.219201 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.219218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.219227 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.321918 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.321988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.322007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.322033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.322051 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.426176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.426246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.426265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.426296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.426317 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.529656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.529716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.529733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.529796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.529834 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.614904 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:37:26.491752779 +0000 UTC Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.633907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.633993 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.634019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.634052 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.634076 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.658718 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:24 crc kubenswrapper[4772]: E0124 03:42:24.658933 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.737987 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.738056 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.738074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.738552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.738628 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.841567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.841656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.841670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.841688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.841702 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.945122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.945188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.945212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.945281 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:24 crc kubenswrapper[4772]: I0124 03:42:24.945305 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:24Z","lastTransitionTime":"2026-01-24T03:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.048108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.048981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.049164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.049305 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.049426 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.153264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.153342 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.153363 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.153394 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.153416 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.256995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.257085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.257108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.257184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.257207 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.361567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.361630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.361653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.361683 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.361705 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.464995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.465077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.465104 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.465137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.465162 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.572628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.572725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.572800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.572849 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.572931 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.615223 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:35:11.643821492 +0000 UTC Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.659140 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:25 crc kubenswrapper[4772]: E0124 03:42:25.659333 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.661151 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:25 crc kubenswrapper[4772]: E0124 03:42:25.661522 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.661695 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:25 crc kubenswrapper[4772]: E0124 03:42:25.661824 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.679649 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.679708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.679734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.679808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.679833 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.783257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.783325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.783345 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.783370 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.783387 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.887225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.887303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.887324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.887352 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.887373 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.990368 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.990429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.990447 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.990468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:25 crc kubenswrapper[4772]: I0124 03:42:25.990481 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:25Z","lastTransitionTime":"2026-01-24T03:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.093908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.093975 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.093993 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.094018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.094037 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.198246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.198550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.198581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.198608 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.198627 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.301502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.301539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.301547 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.301564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.301574 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.404478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.404534 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.404550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.404575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.404593 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.507007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.507057 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.507069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.507087 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.507099 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.609221 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.609287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.609311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.609343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.609366 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.615386 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:13:31.970912668 +0000 UTC Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.658157 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:26 crc kubenswrapper[4772]: E0124 03:42:26.658324 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.707633 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:26 crc kubenswrapper[4772]: E0124 03:42:26.707936 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:26 crc kubenswrapper[4772]: E0124 03:42:26.708037 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. No retries permitted until 2026-01-24 03:42:34.708005548 +0000 UTC m=+51.745096313 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.713978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.714049 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.714072 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.714100 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.714123 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.816928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.816960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.816968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.816981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.817003 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.919880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.919908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.919917 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.919930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:26 crc kubenswrapper[4772]: I0124 03:42:26.919967 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:26Z","lastTransitionTime":"2026-01-24T03:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.023051 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.023384 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.023564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.023723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.023949 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.127910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.128698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.128909 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.129046 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.129168 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.231595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.231810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.231898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.231976 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.232049 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.335035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.335071 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.335084 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.335100 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.335110 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.438473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.438548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.438567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.438594 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.438613 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.541457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.541520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.541552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.541599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.541622 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.615608 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:20:35.356742339 +0000 UTC Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.645077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.645132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.645152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.645182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.645200 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.658500 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:27 crc kubenswrapper[4772]: E0124 03:42:27.658653 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.658826 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.658996 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:27 crc kubenswrapper[4772]: E0124 03:42:27.659167 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:27 crc kubenswrapper[4772]: E0124 03:42:27.659292 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.747907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.747979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.747998 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.748028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.748049 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.850793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.850875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.850895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.850925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.850944 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.953649 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.953700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.953718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.953773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:27 crc kubenswrapper[4772]: I0124 03:42:27.953798 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:27Z","lastTransitionTime":"2026-01-24T03:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.057339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.057730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.057967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.058149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.058279 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.161182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.162040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.162256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.162451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.162608 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.265911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.265969 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.265991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.266024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.266046 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.368802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.368865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.368885 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.368910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.368928 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.472090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.472633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.472869 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.473070 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.473201 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.576862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.576919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.576939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.576967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.576985 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.616353 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:44:26.992737402 +0000 UTC Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.658356 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:28 crc kubenswrapper[4772]: E0124 03:42:28.658536 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.659428 4772 scope.go:117] "RemoveContainer" containerID="8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.681271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.681722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.681774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.681809 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.681829 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.784689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.784808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.784840 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.784877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.784901 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.888280 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.888339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.888358 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.888381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.888397 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.990564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.990610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.990625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.990646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:28 crc kubenswrapper[4772]: I0124 03:42:28.990660 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:28Z","lastTransitionTime":"2026-01-24T03:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.012616 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/1.log" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.014920 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.016017 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.028499 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.040002 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.051441 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.070615 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.085075 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.093457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.093500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.093525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.093542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.093552 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.105652 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.121141 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.138264 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.154570 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.180527 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.196376 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.196430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.196448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.196476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.196495 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.201597 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.229345 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.250350 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.266547 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.289156 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63
a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update 
Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.298976 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.299022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.299034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.299054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.299065 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.309514 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:29Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.401699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.401734 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.401756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.401771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.401781 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.504395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.504431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.504440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.504454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.504463 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.608308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.608377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.608399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.608426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.608444 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.617505 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:31:22.378044372 +0000 UTC Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.658685 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.658832 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:29 crc kubenswrapper[4772]: E0124 03:42:29.658903 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.659028 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:29 crc kubenswrapper[4772]: E0124 03:42:29.659169 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:29 crc kubenswrapper[4772]: E0124 03:42:29.659282 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.711284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.711333 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.711352 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.711374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.711393 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.814601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.814662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.814687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.814718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.814763 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.918119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.918185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.918207 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.918235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:29 crc kubenswrapper[4772]: I0124 03:42:29.918255 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:29Z","lastTransitionTime":"2026-01-24T03:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.020317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.020370 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.020389 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.020412 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.020426 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.020366 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/2.log" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.021275 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/1.log" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.024188 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37" exitCode=1 Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.024232 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.024280 4772 scope.go:117] "RemoveContainer" containerID="8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.025376 4772 scope.go:117] "RemoveContainer" containerID="aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37" Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.025780 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.039258 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.053024 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.063600 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.078496 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.093292 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.107800 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.123114 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.123075 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.123140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.123237 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.123258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.123271 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.136095 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.150379 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.167931 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63
a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ce9198c200b853c65d2c100fbca100103f9b92ebb3a65dc24b652e59bd77191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:16Z\\\",\\\"message\\\":\\\"ecause it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:15Z is after 2025-08-24T17:21:41Z]\\\\nI0124 03:42:15.769490 6180 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:15.769492 6180 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0124 03:42:15.769524 6180 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0124 03:42:15.769491 6180 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.179554 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.191500 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.205840 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.215845 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.225167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.225208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.225222 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.225242 4772 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.225256 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.231088 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bi
nary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.240655 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.327729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.327793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.327806 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.327822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.327832 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.430645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.430707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.430723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.430773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.430792 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.507811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.507895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.507920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.507952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.507972 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.527266 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.533048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.533097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.533108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.533125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.533137 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.550213 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.554813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.554890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.554910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.554935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.554953 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.571271 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.575556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.575621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.575647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.575683 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.575707 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.592254 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.597411 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.597454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.597463 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.597478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.597490 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.615898 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:30Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.616019 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.617780 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:53:32.319286583 +0000 UTC Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.618268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.618297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.618307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.618322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.618334 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.658231 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:30 crc kubenswrapper[4772]: E0124 03:42:30.658386 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.721276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.721324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.721340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.721361 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.721377 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.823818 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.823901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.823927 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.823952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.823974 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.926719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.926820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.926840 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.926864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:30 crc kubenswrapper[4772]: I0124 03:42:30.926881 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:30Z","lastTransitionTime":"2026-01-24T03:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.029520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.029564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.029576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.029593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.029605 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.030785 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/2.log" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.035685 4772 scope.go:117] "RemoveContainer" containerID="aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37" Jan 24 03:42:31 crc kubenswrapper[4772]: E0124 03:42:31.035982 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.056830 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.077117 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.100560 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.133871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.134005 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.134034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.134060 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.134078 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.134659 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.152514 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.173571 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.195231 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.213046 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.236935 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.237460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.237557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc 
kubenswrapper[4772]: I0124 03:42:31.237580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.237608 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.237626 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.255723 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 
24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.275379 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.300240 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.324059 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.340804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.340858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.340876 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.340902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.340924 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.347885 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.368581 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.383488 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:31Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.443219 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.443272 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.443289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.443315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.443333 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.546380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.546430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.546439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.546455 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.546466 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.618554 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:09:58.300983946 +0000 UTC Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.653732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.653816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.653837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.653865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.653883 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.658149 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.658225 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:31 crc kubenswrapper[4772]: E0124 03:42:31.658316 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:31 crc kubenswrapper[4772]: E0124 03:42:31.658509 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.658145 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:31 crc kubenswrapper[4772]: E0124 03:42:31.658640 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.756848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.756901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.756910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.756950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.756961 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.859445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.859494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.859506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.859526 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.859540 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.961600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.961670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.961690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.961722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:31 crc kubenswrapper[4772]: I0124 03:42:31.961782 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:31Z","lastTransitionTime":"2026-01-24T03:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.066032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.066077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.066088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.066109 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.066121 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.168918 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.168985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.169004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.169032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.169052 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.272006 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.272060 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.272072 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.272092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.272105 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.405375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.405450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.405475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.405504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.405523 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.508912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.508980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.509001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.509028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.509049 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.612322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.612380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.612397 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.612422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.612439 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.618817 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:25:35.650701662 +0000 UTC Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.658337 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:32 crc kubenswrapper[4772]: E0124 03:42:32.658553 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.714810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.714888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.714912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.714945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.714971 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.817968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.818040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.818054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.818073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.818083 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.920495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.920555 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.920573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.920597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:32 crc kubenswrapper[4772]: I0124 03:42:32.920614 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:32Z","lastTransitionTime":"2026-01-24T03:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.023240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.023310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.023328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.023357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.023380 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.126733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.126794 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.126804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.126821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.126831 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.229654 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.229724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.229756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.229778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.229788 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.332341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.332421 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.332445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.332471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.332489 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.435977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.436031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.436040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.436057 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.436068 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.539564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.539620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.539639 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.539674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.539692 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.619437 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:03:32.521098308 +0000 UTC Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.642359 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.642383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.642392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.642405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.642414 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.657904 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:33 crc kubenswrapper[4772]: E0124 03:42:33.658093 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.658730 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:33 crc kubenswrapper[4772]: E0124 03:42:33.658930 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.659227 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:33 crc kubenswrapper[4772]: E0124 03:42:33.659387 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.672406 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.686839 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.702485 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.727399 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.743315 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.744575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.744610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.744622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.744640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.744653 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.760040 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.780395 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.796390 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.818690 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.836370 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.869541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.869588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.869605 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.869627 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.869642 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.878881 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.909136 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.925924 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.939076 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.957661 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63
a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.967730 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:33Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.971573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.971724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.971856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.971977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:33 crc kubenswrapper[4772]: I0124 03:42:33.972098 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:33Z","lastTransitionTime":"2026-01-24T03:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.074271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.074646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.074771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.074898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.075032 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.177812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.178062 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.178138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.178210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.178276 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.280311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.280585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.280714 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.280838 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.280919 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.384227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.385014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.385120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.385209 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.385286 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.487887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.487971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.487991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.488019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.488037 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.591173 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.591223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.591232 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.591249 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.591266 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.619929 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:43:23.053347509 +0000 UTC Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.658440 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:34 crc kubenswrapper[4772]: E0124 03:42:34.658648 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.694489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.694538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.694552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.694575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.694587 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.797728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:34 crc kubenswrapper[4772]: E0124 03:42:34.797913 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:34 crc kubenswrapper[4772]: E0124 03:42:34.797985 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. No retries permitted until 2026-01-24 03:42:50.797962141 +0000 UTC m=+67.835052876 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.798579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.798636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.798660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.798692 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.798717 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.901532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.901580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.901590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.901606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:34 crc kubenswrapper[4772]: I0124 03:42:34.901616 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:34Z","lastTransitionTime":"2026-01-24T03:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.004559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.004612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.004622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.004638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.004649 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.107386 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.107452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.107470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.107497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.107514 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.210946 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.211002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.211020 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.211047 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.211072 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.303773 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.304084 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:43:07.304032284 +0000 UTC m=+84.341123039 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.304267 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.304440 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.304528 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:43:07.304512767 +0000 UTC m=+84.341603522 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.313983 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.314078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.314104 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.314139 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.314161 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.406037 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.406124 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.406164 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.406317 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.406393 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:43:07.406370117 +0000 UTC m=+84.443460882 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.406705 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.406729 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.406783 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.406830 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:43:07.406815919 +0000 UTC m=+84.443906684 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.407065 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.407122 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.407140 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.407232 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:43:07.4072092 +0000 UTC m=+84.444299935 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.416730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.416866 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.416884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.416901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.416917 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.519256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.519619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.519637 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.519658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.519672 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.620444 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:13:36.249590745 +0000 UTC Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.622235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.622264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.622274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.622288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.622300 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.658915 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.659046 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.659054 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.658915 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.659429 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:35 crc kubenswrapper[4772]: E0124 03:42:35.659497 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.726644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.726718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.726780 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.726817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.726883 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.830265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.830326 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.830349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.830378 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.830395 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.933814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.933878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.933896 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.933921 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:35 crc kubenswrapper[4772]: I0124 03:42:35.933938 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:35Z","lastTransitionTime":"2026-01-24T03:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.036791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.036845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.036864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.036892 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.036912 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.140127 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.140197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.140219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.140252 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.140277 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.242782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.242873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.242902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.242935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.242959 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.345519 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.345570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.345589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.345613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.345631 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.448674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.448806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.448835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.448875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.448901 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.552536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.552621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.552640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.552669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.552689 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.621600 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:39:43.708288895 +0000 UTC Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.656483 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.656563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.656587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.656622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.656646 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.658019 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:36 crc kubenswrapper[4772]: E0124 03:42:36.658220 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.760441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.760504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.760522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.760552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.760571 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.864120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.864196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.864219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.864247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.864267 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.968021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.968101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.968125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.968160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:36 crc kubenswrapper[4772]: I0124 03:42:36.968178 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:36Z","lastTransitionTime":"2026-01-24T03:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.072292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.072408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.072467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.072497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.072519 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.176090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.176162 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.176226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.176258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.176280 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.279663 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.279702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.279712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.279727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.279751 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.382997 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.383092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.383113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.383170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.383189 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.486208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.486277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.486294 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.486318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.486335 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.589395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.589462 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.589489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.589525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.589553 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.621812 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:56:36.211420693 +0000 UTC Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.658001 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.658114 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.658193 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:37 crc kubenswrapper[4772]: E0124 03:42:37.658184 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:37 crc kubenswrapper[4772]: E0124 03:42:37.658323 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:37 crc kubenswrapper[4772]: E0124 03:42:37.658459 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.692078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.692133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.692151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.692175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.692192 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.796054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.796108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.796125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.796149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.796167 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.900236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.900292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.900311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.900335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:37 crc kubenswrapper[4772]: I0124 03:42:37.900353 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:37Z","lastTransitionTime":"2026-01-24T03:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.002894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.002962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.002981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.003007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.003027 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.105697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.105805 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.105839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.105875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.105896 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.209019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.209093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.209118 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.209148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.209168 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.312543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.312607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.312628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.312656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.312674 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.415891 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.415962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.415989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.416021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.416044 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.518851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.518911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.518930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.518955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.518972 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.622030 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:47:57.29041905 +0000 UTC Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.622096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.622149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.622168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.622195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.622213 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.658360 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:38 crc kubenswrapper[4772]: E0124 03:42:38.658678 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.725513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.725571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.725588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.725613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.725632 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.828852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.828904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.828923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.828953 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.828973 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.933188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.933252 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.933275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.933307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:38 crc kubenswrapper[4772]: I0124 03:42:38.933329 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:38Z","lastTransitionTime":"2026-01-24T03:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.037168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.037249 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.037267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.037292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.037313 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.112276 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.124308 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.139080 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.140377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.140432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.140452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.140475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.140493 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.162186 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.183046 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.210696 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.230869 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.243716 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.243803 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.243822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.243850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.243868 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.251724 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.273105 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.292446 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.310908 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.338523 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63
a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.347365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.347401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.347412 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.347432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.347444 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.355993 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.378304 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.395073 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.407915 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.427248 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.446449 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:39Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.451211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.451257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.451270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.451292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.451305 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.555159 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.555237 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.555262 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.555296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.555319 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.622544 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:00:00.06758656 +0000 UTC Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.657859 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.657901 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:39 crc kubenswrapper[4772]: E0124 03:42:39.658006 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.658087 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:39 crc kubenswrapper[4772]: E0124 03:42:39.658238 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:39 crc kubenswrapper[4772]: E0124 03:42:39.658502 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.659598 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.659650 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.659669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.659695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.659793 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.763656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.763710 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.763719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.763751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.763764 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.866940 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.867019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.867043 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.867072 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.867093 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.970572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.970625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.970640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.970666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:39 crc kubenswrapper[4772]: I0124 03:42:39.970684 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:39Z","lastTransitionTime":"2026-01-24T03:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.072915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.072957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.072999 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.073027 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.073043 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.175633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.175711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.175778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.175816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.175842 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.278253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.278296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.278307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.278327 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.278338 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.381284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.381329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.381346 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.381364 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.381379 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.484715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.484777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.484789 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.484807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.484821 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.588087 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.588143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.588154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.588174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.588188 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.623588 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:22:30.680778236 +0000 UTC Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.658295 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:40 crc kubenswrapper[4772]: E0124 03:42:40.658487 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.690221 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.690271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.690283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.690301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.690315 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: E0124 03:42:40.710353 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.715278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.715312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.715323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.715341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.715352 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: E0124 03:42:40.733083 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.738989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.739256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.739426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.739613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.739796 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: E0124 03:42:40.761101 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the first attempt above, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:40Z is after 2025-08-24T17:21:41Z"
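
Every retry in this stretch fails the same way: the node-identity webhook at 127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-24), so the kubelet's status patch is rejected before it ever reaches the Node object. A minimal Go sketch for confirming the presented certificate's validity window from the node itself; the address comes from the log line above, and InsecureSkipVerify is deliberate because verification is exactly what fails here:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Address taken from the failed webhook POST in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// The first peer certificate is the serving (leaf) certificate.
	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %v\n", leaf.Subject)
	fmt.Printf("notBefore: %v\n", leaf.NotBefore)
	fmt.Printf("notAfter:  %v\n", leaf.NotAfter)
}
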
event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.766696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.766725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.766777 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: E0124 03:42:40.787554 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.793516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.793728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
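
The "x509: certificate has expired or is not yet valid" text is Go's standard validity-window failure: the verifier compares the verification time against the certificate's NotBefore and NotAfter fields. The same check can be reproduced offline; a small sketch, assuming a PEM copy of the serving certificate has been saved to the hypothetical path /tmp/webhook-serving.crt:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same window comparison that yields "expired or is not yet valid".
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid; notBefore =", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Println("expired; notAfter =", cert.NotAfter)
	default:
		fmt.Println("within validity window until", cert.NotAfter)
	}
}
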
event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.794120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.794402 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.794622 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: E0124 03:42:40.818545 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:40Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:40 crc kubenswrapper[4772]: E0124 03:42:40.818819 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.821570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
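
The kubelet does not retry the status patch indefinitely: after a small fixed number of attempts it logs "Unable to update node status" with "update node status exceeds retry count", as above, and waits for the next sync interval, which is why the same burst repeats every few hundred milliseconds. A rough sketch of that bounded-retry shape; the constant mirrors upstream kubelet's nodeStatusUpdateRetry, and the failing patch function is illustrative, not the kubelet's actual code:

package main

import (
	"errors"
	"fmt"
)

// Small retry bound, modeled on the kubelet's nodeStatusUpdateRetry.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err == nil {
			return nil
		}
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Stand-in for the patch that the expired-certificate webhook keeps rejecting.
	err := updateNodeStatus(func() error {
		return errors.New("tls: failed to verify certificate: x509: certificate has expired")
	})
	fmt.Println(err)
}
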
event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.821692 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.821719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.821783 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.821805 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.925553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.925872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.925897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.925966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:40 crc kubenswrapper[4772]: I0124 03:42:40.925991 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:40Z","lastTransitionTime":"2026-01-24T03:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.028988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.029048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.029067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.029093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.029110 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.132875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.132942 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.132962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.132987 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.133006 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.237139 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.237201 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.237219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.237242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.237261 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.340595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.340664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.340689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.340716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.340789 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.443579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.443984 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.444143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.444315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.444487 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.547729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.547782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.547791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.547808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.547818 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.624531 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:02:25.153117479 +0000 UTC Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.651497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.651570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.651595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.651631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.651654 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.658890 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.658952 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:41 crc kubenswrapper[4772]: E0124 03:42:41.659106 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.659162 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:41 crc kubenswrapper[4772]: E0124 03:42:41.659370 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:41 crc kubenswrapper[4772]: E0124 03:42:41.659476 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
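
Independently of the webhook failure, the node stays NotReady because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/; that file is normally dropped there by the cluster network plugin once it is running, and until then every pod sandbox creation is skipped, as the pod_workers errors above show. A stdlib-only Go sketch of the directory check implied by the message; the extension list is an assumption (kubelet and CRI-O use libcni for the real lookup):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	found := 0
	for _, e := range entries {
		// Assumed extension set; libcni recognizes .conf, .conflist and .json.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// The state the kubelet keeps reporting: NetworkReady=false.
		fmt.Println("no CNI configuration file in", dir)
	}
}
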
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.755451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.755527 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.755551 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.755575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.755600 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.858959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.859353 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.859495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.859648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.859821 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.962950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.963016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.963034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.963093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.963116 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:41Z","lastTransitionTime":"2026-01-24T03:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.068311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.068874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.068898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.068926 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.068945 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.172333 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.172430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.172448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.172476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.172497 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.172497 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.276151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.276198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.276210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.276226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.276237 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.378666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.378717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.378727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.378756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.378769 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.481021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.481328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.481577 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.481831 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.481936 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.585241 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.585699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.585884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.586116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.586314 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.624680 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:33:01.775461618 +0000 UTC
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.658122 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:42:42 crc kubenswrapper[4772]: E0124 03:42:42.658331 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.689125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.689179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.689197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.689221 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
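The certificate_manager.go entries (03:42:41.624531 and 03:42:42.624680 so far) report the same expiration, 2026-02-24 05:53:03 UTC, but a different rotation deadline each pass. That is expected behavior: client-go's certificate manager re-derives the deadline on every check, at a jittered point roughly 70-90% of the way through the certificate's validity window, so a deadline already in the past simply means "rotate now". A sketch of that policy (illustrative, not the kubelet's exact code; the one-year validity window is an assumption, since notBefore is not in the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics the jitter policy described above: pick a random
// point at 70-90% of the certificate's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year validity
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}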
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.689237 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.791699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.792157 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.792357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.792583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.792813 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.895700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.896061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.896244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.896423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.896587 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:42Z","lastTransitionTime":"2026-01-24T03:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.999557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:42 crc kubenswrapper[4772]: I0124 03:42:42.999887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.000017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.000183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.000322 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.103388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.103436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.103448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.103468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.103484 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.206209 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.206672 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.206919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.207091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.207236 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.310215 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.310574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.311019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.311366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
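The NotReady loop repeats because the runtime keeps answering NetworkReady=false: there is still no CNI network configuration under /etc/kubernetes/cni/net.d/. The gating check is essentially "does the conf dir contain any network config file"; a standalone Go approximation (the extension list follows libcni's ConfFiles convention; the path is the one named in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any CNI network configuration,
// counting *.conf, *.conflist, and *.json files as libcni does.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err)
}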
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.311678 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.414522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.414590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.414612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.414638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.414657 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.518529 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.518606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.518624 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.518648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.518666 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.621395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.621452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.621468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.621492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.621508 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.624880 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:08:47.064025156 +0000 UTC
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.657930 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:42:43 crc kubenswrapper[4772]: E0124 03:42:43.658185 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.658254 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.658322 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:42:43 crc kubenswrapper[4772]: E0124 03:42:43.659203 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.659621 4772 scope.go:117] "RemoveContainer" containerID="aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37"
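With the same five-entry status block recurring roughly every 100 ms, the useful signal is how long the flapping lasts, not each individual entry. A small, self-contained Go sketch that scans journal text in this format and counts "Node became not ready" emissions per second (the regular expression and the two embedded sample lines are tailored to this log; adjust for other dates or processes):

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Two sample entries in the format used throughout this journal.
	journal := `Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.444487 4772 setters.go:603] "Node became not ready" node="crc"
Jan 24 03:42:41 crc kubenswrapper[4772]: I0124 03:42:41.547818 4772 setters.go:603] "Node became not ready" node="crc"`

	// Capture the HH:MM:SS part of the klog timestamp on matching lines.
	re := regexp.MustCompile(`I0124 (\d{2}:\d{2}:\d{2})\.\d+ \d+ setters\.go:\d+\] "Node became not ready"`)

	counts := map[string]int{}
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	fmt.Println(counts) // map[03:42:41:2]
}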
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:43 crc kubenswrapper[4772]: E0124 03:42:43.659982 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.677716 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.697642 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.718452 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.724614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.724675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.724691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.724716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.724733 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.732489 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.753560 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.776490 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.794055 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.807339 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.820848 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.827344 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.827535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.827637 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.827761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.827873 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.837902 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.855884 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.868931 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.882803 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.901697 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.915533 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.930482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.930516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.930528 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.930545 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.930557 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:43Z","lastTransitionTime":"2026-01-24T03:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.938094 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:43 crc kubenswrapper[4772]: I0124 03:42:43.954284 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:43Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.033365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.033415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.033427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.033449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.033461 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.137094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.137141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.137153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.137171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.137183 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.239852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.239898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.239922 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.239943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.239957 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.343101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.343149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.343167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.343191 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.343208 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.445575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.445643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.445662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.445689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.445707 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.548371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.548434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.548454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.548483 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.548502 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.625013 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:44:53.778729354 +0000 UTC Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.651617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.651708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.651730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.651788 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.651806 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.658014 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:44 crc kubenswrapper[4772]: E0124 03:42:44.658250 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.755259 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.755316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.755328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.755348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.755361 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.858260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.858335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.858359 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.858385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.858403 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.960711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.960804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.960823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.960848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:44 crc kubenswrapper[4772]: I0124 03:42:44.960870 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:44Z","lastTransitionTime":"2026-01-24T03:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.064618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.065237 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.065461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.065870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.066027 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.168839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.169138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.169311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.169460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.169590 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.273212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.273285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.273303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.273338 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.273357 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.376776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.376842 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.376861 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.376887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.376906 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.480507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.480902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.481319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.481645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.482103 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.585492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.585792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.587825 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.587873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.587906 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.625207 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:40:49.249121923 +0000 UTC
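The certificate_manager line just above, and its siblings at 03:42:44, 03:42:46, 03:42:47 and 03:42:48, print a different rotation deadline on every sync even though the expiration (2026-02-24 05:53:03 UTC) never changes. That is consistent with the kubelet's certificate manager re-drawing the deadline as a uniformly random point in the 70-90% band of the certificate's validity window on each attempt; every drawn deadline in this log (Nov-Dec 2025) is already in the past at 2026-01-24, so rotation is considered overdue and retried continuously. A small illustrative sketch, assuming a one-year certificate (only the expiration comes from the log; the issue time and the exact jitter formula are assumptions matching the upstream behaviour, not quoted code):

    import random
    from datetime import datetime, timedelta, timezone

    # Expiration from the log; the issue time is assumed (one-year certificate).
    NOT_AFTER = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    NOT_BEFORE = NOT_AFTER - timedelta(days=365)  # assumption, not from the log

    validity = NOT_AFTER - NOT_BEFORE
    # Jittered deadline: lands 70-90% of the way into the validity window.
    deadline = NOT_BEFORE + validity * (0.7 + 0.2 * random.random())
    print("rotation deadline:", deadline)

With a 365-day window the band spans roughly 2025-11-06 to 2026-01-18, which brackets every deadline printed in this log (2025-11-08 through 2025-12-30); each one is before the node clock of 2026-01-24, hence a fresh value on every sync.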
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.658935 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.658991 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.659063 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:42:45 crc kubenswrapper[4772]: E0124 03:42:45.659147 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:42:45 crc kubenswrapper[4772]: E0124 03:42:45.659312 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:42:45 crc kubenswrapper[4772]: E0124 03:42:45.659382 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.690337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.690396 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.690414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.690440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.690462 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.793617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.793679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.793697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.793722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.793769 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.899187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.899279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.899304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.899380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:45 crc kubenswrapper[4772]: I0124 03:42:45.899420 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:45Z","lastTransitionTime":"2026-01-24T03:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.002388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.002447 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.002467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.002491 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.002509 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.105356 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.105428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.105447 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.105472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.105490 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.207951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.208024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.208044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.208070 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.208088 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.310949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.311026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.311045 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.311075 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.311093 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.414375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.414438 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.414460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.414488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.414508 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.517886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.517942 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.517957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.517979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.517993 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.621219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.621268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.621286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.621308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.621324 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.626227 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:14:08.766766932 +0000 UTC Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.657974 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:46 crc kubenswrapper[4772]: E0124 03:42:46.658187 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.724718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.724797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.724815 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.724837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.724854 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.828191 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.828269 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.828295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.828330 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.828357 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.931015 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.931075 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.931094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.931124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:46 crc kubenswrapper[4772]: I0124 03:42:46.931141 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:46Z","lastTransitionTime":"2026-01-24T03:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.034193 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.034240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.034257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.034280 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.034297 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.137315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.137377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.137394 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.137419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.137439 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.239933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.240000 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.240019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.240084 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.240103 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.342374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.342433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.342442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.342461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.342473 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.444814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.444858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.444867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.444881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.444891 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.548435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.548493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.548510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.548535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.548553 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.627370 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:09:39.72001157 +0000 UTC
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.651836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.651905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.651924 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.651952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.651970 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.658287 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.658461 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:42:47 crc kubenswrapper[4772]: E0124 03:42:47.658618 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
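Every NotReady heartbeat and every "Error syncing pod" entry in this log carries the same underlying message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick sketch of the check the runtime is effectively performing, runnable on the node itself (the directory comes from the log; the extension filter matches what CNI configuration loaders accept, i.e. .conf, .conflist and .json):

    import json
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path taken from the log message

    try:
        entries = sorted(os.listdir(CNI_CONF_DIR))
    except FileNotFoundError:
        entries = []

    configs = [e for e in entries if e.endswith((".conf", ".conflist", ".json"))]
    if not configs:
        print(f"no CNI configuration file in {CNI_CONF_DIR} - matches the kubelet error")
    for name in configs:
        with open(os.path.join(CNI_CONF_DIR, name)) as f:
            net = json.load(f)
        # A .conflist carries a 'plugins' array; a single .conf has 'type' at top level.
        plugins = net.get("type") or [p.get("type") for p in net.get("plugins", [])]
        print(name, "->", net.get("name"), plugins)

On OpenShift this file is normally written once the network operator's pods come up, so an empty directory here is plausibly downstream of the webhook certificate failure recorded earlier, with the node stuck NotReady until one of the two is resolved.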
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.658467 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:47 crc kubenswrapper[4772]: E0124 03:42:47.658965 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:47 crc kubenswrapper[4772]: E0124 03:42:47.659044 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.754715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.754824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.754842 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.754874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.754894 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.858641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.858715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.858767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.858802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.858829 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.962237 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.962283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.962293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.962311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:47 crc kubenswrapper[4772]: I0124 03:42:47.962321 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:47Z","lastTransitionTime":"2026-01-24T03:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.065928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.066269 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.066349 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.066434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.066512 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.169366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.169439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.169456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.169481 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.169499 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.272955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.273030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.273048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.273076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.273096 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.375590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.375676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.375696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.375728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.375787 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.478896 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.479255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.479413 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.479553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.479798 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.584041 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.584079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.584091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.584107 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.584119 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.628461 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:00:37.424032957 +0000 UTC
Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.658075 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:42:48 crc kubenswrapper[4772]: E0124 03:42:48.658295 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
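The Ready=False condition the kubelet keeps writing above can also be read back from the API to watch for recovery once the network provider starts. A minimal sketch using the kubernetes Python client (third-party, assumed installed, with a kubeconfig that can reach this cluster; the node name crc comes from the log):

    from kubernetes import client, config  # third-party 'kubernetes' client, assumed available

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    node = v1.read_node("crc")  # node name from the log
    for cond in node.status.conditions:
        # While no CNI config exists, expect: Ready False KubeletNotReady
        print(f"{cond.type:<22} {cond.status:<6} {cond.reason or ''}")

When the Ready condition flips to True, the "container runtime network not ready" heartbeats in this log should stop.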
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.686497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.686541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.686555 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.686572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.686582 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.790163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.790201 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.790212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.790229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:48 crc kubenswrapper[4772]: I0124 03:42:48.790241 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:48Z","lastTransitionTime":"2026-01-24T03:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.098877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.098936 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.098949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.098970 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.098983 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:49Z","lastTransitionTime":"2026-01-24T03:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.629967 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:46:11.114193667 +0000 UTC
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.658771 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.658845 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:42:49 crc kubenswrapper[4772]: I0124 03:42:49.658873 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
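Note: the certificate_manager entries in this log report the same kubelet-serving certificate (expiring 2026-02-24 05:53:03 UTC) with a different rotation deadline each time (2025-11-08 above, then 2025-12-10). The deadline is re-drawn with random jitter inside the certificate's validity window so that a fleet of kubelets does not rotate in lockstep. A sketch of that idea in Go, assuming the commonly cited 70-90% window and a one-year validity starting 2025-02-24 (both assumptions, not read from this log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point between 70% and 90% of the
    // certificate's validity window, which is why the log shows the same
    // NotAfter but a different deadline on every evaluation.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }

Under those assumptions the draw lands between roughly 2025-11-06 and 2026-01-12, which brackets the deadlines seen here.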
Jan 24 03:42:49 crc kubenswrapper[4772]: E0124 03:42:49.659562 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:42:49 crc kubenswrapper[4772]: E0124 03:42:49.659690 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:42:49 crc kubenswrapper[4772]: E0124 03:42:49.659858 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
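Note: a failed pod sync like the three entries above is skipped and retried later with a growing delay; the volume-mount failure further down in this log states the policy explicitly ("No retries permitted until ... (durationBeforeRetry 32s)"). A minimal sketch of that exponential-backoff pattern in Go (illustrative; the 500ms initial delay and 2m cap are assumptions, only the 32s step is visible in this log):

    package main

    import (
        "fmt"
        "time"
    )

    // nextRetry doubles the delay on every consecutive failure up to a cap,
    // the pattern behind "durationBeforeRetry 32s" later in this log.
    func nextRetry(prev, initial, max time.Duration) time.Duration {
        if prev == 0 {
            return initial
        }
        next := prev * 2
        if next > max {
            return max
        }
        return next
    }

    func main() {
        delay := time.Duration(0)
        for i := 0; i < 8; i++ {
            delay = nextRetry(delay, 500*time.Millisecond, 2*time.Minute)
            fmt.Printf("attempt %d: wait %v before retrying\n", i+1, delay)
        }
        // 500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s: the seventh consecutive
        // failure lands on the 32s delay reported by nestedpendingoperations.
    }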
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.031185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.031228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.031239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.031254 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.031265 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:50Z","lastTransitionTime":"2026-01-24T03:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
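Note: each "Node became not ready" entry above serializes one node condition. A trimmed Go mirror of that JSON shape (field names copied from the log; the canonical type is NodeCondition in the Kubernetes core/v1 API, and the message below is shortened for readability):

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // NodeCondition is a simplified mirror of the condition object logged
    // by setters.go above, not the full Kubernetes API type.
    type NodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:50Z","lastTransitionTime":"2026-01-24T03:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("node Ready=%s since %s: %s\n", c.Status, c.LastTransitionTime, c.Reason)
    }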
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.630994 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:01:54.780078341 +0000 UTC
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.658514 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:42:50 crc kubenswrapper[4772]: E0124 03:42:50.658704 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.888076 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:42:50 crc kubenswrapper[4772]: E0124 03:42:50.888219 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 24 03:42:50 crc kubenswrapper[4772]: E0124 03:42:50.888278 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. No retries permitted until 2026-01-24 03:43:22.888262433 +0000 UTC m=+99.925353158 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.899225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.899256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.899265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.899284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.899296 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:50Z","lastTransitionTime":"2026-01-24T03:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:50 crc kubenswrapper[4772]: E0124 03:42:50.918336 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:50Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.922995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.923038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.923049 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.923066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.923075 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:50Z","lastTransitionTime":"2026-01-24T03:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:50 crc kubenswrapper[4772]: E0124 03:42:50.942563 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:50Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.947018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.947052 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.947064 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.947080 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.947089 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:50Z","lastTransitionTime":"2026-01-24T03:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:50 crc kubenswrapper[4772]: E0124 03:42:50.965227 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:50Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.969960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.970022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.970034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.970050 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.970060 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:50Z","lastTransitionTime":"2026-01-24T03:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:50 crc kubenswrapper[4772]: E0124 03:42:50.982236 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:50Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.985851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.985879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.985889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.985904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:50 crc kubenswrapper[4772]: I0124 03:42:50.985914 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:50Z","lastTransitionTime":"2026-01-24T03:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:51 crc kubenswrapper[4772]: E0124 03:42:51.003345 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:51Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:51 crc kubenswrapper[4772]: E0124 03:42:51.003460 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.005228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.005253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.005262 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.005278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.005289 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.107723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.107785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.107794 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.107810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.107823 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.210666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.210707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.210718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.210751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.210761 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.313591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.313621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.313629 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.313646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.313655 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.415785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.415822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.415831 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.415844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.415855 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.518883 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.518937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.518954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.518978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.518996 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.621610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.621671 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.621693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.621723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.621781 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.632008 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:49:23.909078535 +0000 UTC
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.658162 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:42:51 crc kubenswrapper[4772]: E0124 03:42:51.658311 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.658584 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:42:51 crc kubenswrapper[4772]: E0124 03:42:51.658685 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.658796 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:42:51 crc kubenswrapper[4772]: E0124 03:42:51.658947 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.724456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.724506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.724518 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.724537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.724550 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.826809 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.826853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.826865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.826880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.826891 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.929094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.929127 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.929135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.929149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:51 crc kubenswrapper[4772]: I0124 03:42:51.929157 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:51Z","lastTransitionTime":"2026-01-24T03:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.031898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.031951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.031969 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.031991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.032008 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.109237 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/0.log"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.109310 4772 generic.go:334] "Generic (PLEG): container finished" podID="3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d" containerID="ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0" exitCode=1
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.109351 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerDied","Data":"ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0"}
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.109961 4772 scope.go:117] "RemoveContainer" containerID="ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.130169 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.141420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.141451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.141460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.141474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.141485 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.151968 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.174021 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.188168 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.206077 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.224088 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:51Z\\\",\\\"message\\\":\\\"2026-01-24T03:42:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54\\\\n2026-01-24T03:42:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54 to /host/opt/cni/bin/\\\\n2026-01-24T03:42:06Z [verbose] multus-daemon started\\\\n2026-01-24T03:42:06Z [verbose] Readiness Indicator file check\\\\n2026-01-24T03:42:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.239119 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.242890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.242912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.242920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.242933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.242942 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.254426 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.270717 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.287493 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.298485 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.311592 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.329227 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.344381 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.345430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.345507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.345531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.345562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.345586 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.361120 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.375353 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.393236 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:52Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.447995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.448039 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.448050 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.448066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.448075 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.551208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.551255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.551268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.551285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.551305 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.632776 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:21:47.794874508 +0000 UTC Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.654074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.654155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.654175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.654205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.654222 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.658406 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:52 crc kubenswrapper[4772]: E0124 03:42:52.658604 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.756773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.756849 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.756870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.756899 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.756917 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.859525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.859587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.859605 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.859629 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.859645 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.962782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.962830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.962842 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.962862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:52 crc kubenswrapper[4772]: I0124 03:42:52.962876 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:52Z","lastTransitionTime":"2026-01-24T03:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.066155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.066217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.066234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.066258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.066277 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.114718 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/0.log" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.114792 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerStarted","Data":"358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.130174 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.147914 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d23
3900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.161864 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.168779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.168822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.168831 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.168847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.168857 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.180148 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.197846 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.212540 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.229618 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.244107 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.256287 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.267307 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.271646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.271693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.271713 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.271782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.271803 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.285794 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.296160 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.308512 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.324509 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:51Z\\\",\\\"message\\\":\\\"2026-01-24T03:42:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54\\\\n2026-01-24T03:42:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54 to /host/opt/cni/bin/\\\\n2026-01-24T03:42:06Z [verbose] multus-daemon started\\\\n2026-01-24T03:42:06Z [verbose] Readiness Indicator file check\\\\n2026-01-24T03:42:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.338471 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.354259 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.370244 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.373784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.373806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.373816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.373829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.373839 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.476708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.476772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.476785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.476806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.476821 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.579256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.579309 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.579324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.579346 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.579458 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.633840 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:07:37.907370397 +0000 UTC Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.658439 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.658519 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.658572 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:53 crc kubenswrapper[4772]: E0124 03:42:53.658692 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:53 crc kubenswrapper[4772]: E0124 03:42:53.658828 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:53 crc kubenswrapper[4772]: E0124 03:42:53.658923 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.675290 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.684123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.684172 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.684186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.684207 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.684226 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.688478 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.700023 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 
03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.714204 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.729474 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.742544 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.756405 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.774206 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.786824 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.787267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.787407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.787508 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.787614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.787694 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.800516 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.817653 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.829165 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.842575 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.857826 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:51Z\\\",\\\"message\\\":\\\"2026-01-24T03:42:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54\\\\n2026-01-24T03:42:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54 to /host/opt/cni/bin/\\\\n2026-01-24T03:42:06Z [verbose] multus-daemon started\\\\n2026-01-24T03:42:06Z [verbose] Readiness Indicator file check\\\\n2026-01-24T03:42:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.868691 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.881185 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.889484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.889510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc 
kubenswrapper[4772]: I0124 03:42:53.889519 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.889533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.889545 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.891085 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:53Z is after 2025-08-24T17:21:41Z" Jan 
24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.991146 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.991340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.991445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.991526 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:53 crc kubenswrapper[4772]: I0124 03:42:53.991631 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:53Z","lastTransitionTime":"2026-01-24T03:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.093708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.093983 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.094054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.094125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.094184 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.196492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.196766 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.196839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.196899 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.196960 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.300970 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.301210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.301321 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.301424 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.301503 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.403932 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.404260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.404365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.404509 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.404631 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.506664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.506690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.506698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.506712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.506721 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.608326 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.608367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.608380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.608400 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.608412 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.634835 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:42:06.612637483 +0000 UTC Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.657855 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:54 crc kubenswrapper[4772]: E0124 03:42:54.657962 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.711359 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.711418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.711436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.711460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.711476 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.814409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.814454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.814468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.814486 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.814497 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.916943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.917011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.917029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.917053 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:54 crc kubenswrapper[4772]: I0124 03:42:54.917071 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:54Z","lastTransitionTime":"2026-01-24T03:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.019488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.019718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.019846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.019925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.020008 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.122535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.122574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.122584 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.122600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.122611 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.224295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.224331 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.224343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.224355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.224366 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.326760 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.326797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.326808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.326823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.326833 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.428556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.428583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.428591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.428603 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.428611 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.533778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.533811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.533821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.533836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.533848 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.635281 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:52:43.208431545 +0000 UTC Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.635871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.635920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.635936 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.635958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.635974 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.658953 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.659028 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.659296 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:55 crc kubenswrapper[4772]: E0124 03:42:55.659477 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:55 crc kubenswrapper[4772]: E0124 03:42:55.659665 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.659723 4772 scope.go:117] "RemoveContainer" containerID="aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37" Jan 24 03:42:55 crc kubenswrapper[4772]: E0124 03:42:55.659798 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.737867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.737909 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.737920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.737938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.737949 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.839785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.839812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.839821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.839836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.839844 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.941373 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.941408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.941418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.941431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:55 crc kubenswrapper[4772]: I0124 03:42:55.941439 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:55Z","lastTransitionTime":"2026-01-24T03:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.043845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.043883 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.043892 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.043906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.043915 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.123774 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/2.log" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.126414 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.126793 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.138111 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.145819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.145858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.145871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.145885 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.145895 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.148963 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.161018 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:51Z\\\",\\\"message\\\":\\\"2026-01-24T03:42:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54\\\\n2026-01-24T03:42:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54 to /host/opt/cni/bin/\\\\n2026-01-24T03:42:06Z [verbose] multus-daemon started\\\\n2026-01-24T03:42:06Z [verbose] Readiness Indicator file check\\\\n2026-01-24T03:42:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.171297 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.187811 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.202053 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.214441 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 
03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.227621 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.241955 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.248625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.248661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.248670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.248685 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.248696 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.253817 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.266552 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.284227 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.297187 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.316600 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.332284 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.344994 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.350213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.350234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.350242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.350254 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.350262 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.365441 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:56Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.452004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.452047 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.452061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.452205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.452216 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.554631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.554669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.554679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.554698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.554708 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.635388 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:54:45.410327311 +0000 UTC Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.657724 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:56 crc kubenswrapper[4772]: E0124 03:42:56.657835 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.658076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.658230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.658381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.658545 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.658727 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.761537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.761589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.761600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.761617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.761629 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.863959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.864009 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.864018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.864031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.864041 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.967037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.967073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.967082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.967097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:56 crc kubenswrapper[4772]: I0124 03:42:56.967105 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:56Z","lastTransitionTime":"2026-01-24T03:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.069431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.069474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.069520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.069540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.069552 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.130697 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/3.log" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.131883 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/2.log" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.135031 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267" exitCode=1 Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.135085 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.135131 4772 scope.go:117] "RemoveContainer" containerID="aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.136358 4772 scope.go:117] "RemoveContainer" containerID="20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267" Jan 24 03:42:57 crc kubenswrapper[4772]: E0124 03:42:57.136503 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.150975 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.170977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.171010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.171018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.171033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.171042 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.171067 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.192879 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.209887 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.224797 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.247375 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.266624 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.273720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.273815 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.273835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.273861 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.273878 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.286048 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaf339e7778a04d3e6920e46855935f86346ee63a9667a18b787b6ead1219f37\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:29Z\\\",\\\"message\\\":\\\"ock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 03:42:29.618311 6386 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clust\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:56Z\\\",\\\"message\\\":\\\"iagnostics/network-check-source-55646444c4-trplf openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f openshift-image-registry/node-ca-gpkkg openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-bnn82 openshift-multus/multus-additional-cni-plugins-jvgzj openshift-network-operator/iptables-alerter-4ln5h openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/network-metrics-daemon-mpdb8]\\\\nI0124 03:42:56.454220 6781 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0124 03:42:56.454236 6781 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-mpdb8 before timer (time: 2026-01-24 03:42:57.494588062 +0000 UTC m=+1.568126136): skip\\\\nI0124 03:42:56.454261 6781 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0124 03:42:56.454317 6781 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.300795 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.317892 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.338428 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.350494 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.363719 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.373858 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.375686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.375725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.375754 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.375771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.375780 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.384464 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.395925 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:51Z\\\",\\\"message\\\":\\\"2026-01-24T03:42:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54\\\\n2026-01-24T03:42:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54 to /host/opt/cni/bin/\\\\n2026-01-24T03:42:06Z [verbose] multus-daemon started\\\\n2026-01-24T03:42:06Z [verbose] Readiness Indicator file check\\\\n2026-01-24T03:42:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.404610 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:57Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.482560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.483125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.483140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.483157 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.483167 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.586454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.586498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.586515 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.586539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.586556 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.636355 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:46:44.178594407 +0000 UTC Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.657845 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.657911 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:57 crc kubenswrapper[4772]: E0124 03:42:57.657987 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:57 crc kubenswrapper[4772]: E0124 03:42:57.658073 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.658102 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:57 crc kubenswrapper[4772]: E0124 03:42:57.658175 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.688654 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.688681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.688691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.688709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.688721 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.791246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.791295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.791307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.791323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.791335 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.893401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.893449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.893464 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.893487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.893502 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.995681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.995708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.995717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.995729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:57 crc kubenswrapper[4772]: I0124 03:42:57.995758 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:57Z","lastTransitionTime":"2026-01-24T03:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.098435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.098488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.098504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.098527 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.098544 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.139717 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/3.log" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.143983 4772 scope.go:117] "RemoveContainer" containerID="20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267" Jan 24 03:42:58 crc kubenswrapper[4772]: E0124 03:42:58.144133 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.162472 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 
2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.184690 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.200610 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.201709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.201764 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.201774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.201787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.201796 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.219716 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.239767 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:51Z\\\",\\\"message\\\":\\\"2026-01-24T03:42:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54\\\\n2026-01-24T03:42:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54 to /host/opt/cni/bin/\\\\n2026-01-24T03:42:06Z [verbose] multus-daemon started\\\\n2026-01-24T03:42:06Z [verbose] Readiness Indicator file check\\\\n2026-01-24T03:42:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.257988 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.286893 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.304937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.305003 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.305021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.305049 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 
03:42:58.305072 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.308336 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.325844 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.341124 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.357152 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.375965 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.394810 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.407621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.407686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.407704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.407730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.407845 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.418938 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:56Z\\\",\\\"message\\\":\\\"iagnostics/network-check-source-55646444c4-trplf openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f openshift-image-registry/node-ca-gpkkg openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-bnn82 openshift-multus/multus-additional-cni-plugins-jvgzj openshift-network-operator/iptables-alerter-4ln5h openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/network-metrics-daemon-mpdb8]\\\\nI0124 03:42:56.454220 6781 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0124 03:42:56.454236 6781 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-mpdb8 before timer (time: 2026-01-24 03:42:57.494588062 +0000 UTC m=+1.568126136): skip\\\\nI0124 03:42:56.454261 6781 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0124 03:42:56.454317 6781 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.438767 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.458470 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.476665 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:42:58Z is after 2025-08-24T17:21:41Z" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.510461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.510533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.510555 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.510584 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.510602 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.613523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.613574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.613587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.613603 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.613613 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.638167 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:00:20.366297613 +0000 UTC Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.659104 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:42:58 crc kubenswrapper[4772]: E0124 03:42:58.659325 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.716449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.716522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.716548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.716576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.716599 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.819846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.819914 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.819936 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.819964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:42:58 crc kubenswrapper[4772]: I0124 03:42:58.819989 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:42:58Z","lastTransitionTime":"2026-01-24T03:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[This five-message cycle repeats at roughly 100 ms intervals; the identical repetitions from 03:42:58.923 through 03:43:01.316 are omitted below, and only distinct entries are kept.]
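The condition={...} payload in the setters.go entry above is a serialized v1.NodeCondition. Below is a minimal, self-contained sketch of how such a payload is produced; the struct mirrors the condition's JSON fields but is illustrative, not the kubelet's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors the fields of the v1.NodeCondition that the kubelet
// serializes into the "Node became not ready" entries above.
type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	ts := time.Date(2026, 1, 24, 3, 42, 58, 0, time.UTC)
	ready := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	// time.Time marshals to RFC 3339, matching the timestamps in the log.
	b, _ := json.Marshal(ready)
	fmt.Println(string(b))
}
```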
Jan 24 03:42:59 crc kubenswrapper[4772]: I0124 03:42:59.638730 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:14:00.792040869 +0000 UTC Jan 24 03:42:59 crc kubenswrapper[4772]: I0124 03:42:59.658257 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:42:59 crc kubenswrapper[4772]: E0124 03:42:59.658709 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:42:59 crc kubenswrapper[4772]: I0124 03:42:59.658377 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:42:59 crc kubenswrapper[4772]: E0124 03:42:59.659464 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:42:59 crc kubenswrapper[4772]: I0124 03:42:59.658329 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:42:59 crc kubenswrapper[4772]: E0124 03:42:59.660114 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
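The three pods skipped here (and a fourth below) run in the pod network, which is why they cannot be synced while the runtime reports NetworkReady=false; host-network pods are exempt from this gate. A sketch of that check, assuming only the wrapping visible in the messages above ("network is not ready: <runtime error>"); syncGate is a hypothetical name, not a kubelet function.

```go
package main

import (
	"errors"
	"fmt"
)

// syncGate is a hypothetical stand-in for the runtime-state check the kubelet
// applies before creating a sandbox: pods in the pod network cannot be synced
// while the runtime reports NetworkReady=false, but host-network pods can.
func syncGate(networkErr error, hostNetwork bool) error {
	if networkErr != nil && !hostNetwork {
		return fmt.Errorf("network is not ready: %v", networkErr)
	}
	return nil
}

func main() {
	cniErr := errors.New("container runtime network not ready: NetworkReady=false " +
		"reason:NetworkPluginNotReady message:Network plugin returns error: " +
		"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
		"Has your network provider started?")

	// network-check-target-xd92c runs in the pod network: sync is skipped.
	fmt.Println(syncGate(cniErr, false))
	// A host-network pod would still be synced (prints <nil>).
	fmt.Println(syncGate(cniErr, true))
}
```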
Jan 24 03:43:00 crc kubenswrapper[4772]: I0124 03:43:00.639820 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:04:53.808310696 +0000 UTC Jan 24 03:43:00 crc kubenswrapper[4772]: I0124 03:43:00.658312 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:00 crc kubenswrapper[4772]: E0124 03:43:00.658523 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
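The two certificate_manager entries report the same expiration (2026-02-24) but different rotation deadlines (2026-01-07 and 2025-12-24), and both deadlines are already past at the log's clock of 2026-01-24, so rotation is overdue. client-go's certificate manager re-draws the deadline at a random point late in the validity window each time it evaluates it, which is consistent with the two differing values. A sketch of that computation; the 70-90% jitter band is an assumption based on k8s.io/client-go/util/certificate, and the 30-day lifetime is also assumed, since notBefore is not logged.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline sketches the jitter used by client-go's certificate
// manager: rotation is scheduled at a uniformly random point between 70% and
// 90% of the certificate's validity window, re-drawn on each evaluation.
// (The exact fractions are an assumption, not taken from this log.)
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	// notAfter matches the logged expiration; notBefore is assumed (30 days).
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-30 * 24 * time.Hour)
	for i := 0; i < 2; i++ {
		// Two evaluations yield two different deadlines, as in the log.
		fmt.Println(nextRotationDeadline(notBefore, notAfter))
	}
}
```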
Jan 24 03:43:01 crc kubenswrapper[4772]: E0124 03:43:01.330057 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:01Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.333823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.333845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.333855 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.333866 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.333873 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
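The patch above never reaches admission on its merits: the node-identity webhook's serving certificate expired on 2025-08-24, five months before the log's clock. The sketch below reproduces the same validity-window check that certificate verification performs during the TLS handshake, against a hypothetical self-signed certificate whose NotAfter matches the logged one; the CommonName and one-year lifetime are illustrative.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Hypothetical self-signed certificate whose validity window mirrors the
	// one in the log: it expired 2025-08-24T17:21:41Z.
	notAfter := time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC)
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:    notAfter.Add(-365 * 24 * time.Hour),
		NotAfter:     notAfter,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, _ := x509.ParseCertificate(der)

	// The same NotBefore/NotAfter window check that verification applies; at
	// the log's clock the certificate is five months past expiry.
	now := time.Date(2026, 1, 24, 3, 43, 1, 0, time.UTC)
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}
```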
[The status patch is retried at 03:43:01.345216 and again at 03:43:01.364334; each retry sends the same payload, the 03:43:01.345216 attempt fails with the same node.network-node-identity.openshift.io webhook certificate error, and the interleaved Node status cycle at 03:43:01.348 repeats unchanged. These duplicate entries are omitted.]
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:01Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.367749 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.367778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.367787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.367800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.367809 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:01 crc kubenswrapper[4772]: E0124 03:43:01.378657 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:01Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.382277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.382302 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.382310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.382324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.382334 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:01 crc kubenswrapper[4772]: E0124 03:43:01.397788 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:01Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:01 crc kubenswrapper[4772]: E0124 03:43:01.397949 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.400867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.400900 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.400913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.400930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.400942 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.503779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.504141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.504295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.504478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.504652 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.607099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.607151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.607161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.607175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.607184 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.640767 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:21:48.971844497 +0000 UTC Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.658114 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.658164 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.658221 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:01 crc kubenswrapper[4772]: E0124 03:43:01.658351 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:01 crc kubenswrapper[4772]: E0124 03:43:01.658420 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:01 crc kubenswrapper[4772]: E0124 03:43:01.658645 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.708818 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.708850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.708858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.708870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.708878 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.812082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.812124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.812133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.812149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.812158 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.914961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.915365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.915513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.915650 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:01 crc kubenswrapper[4772]: I0124 03:43:01.915877 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:01Z","lastTransitionTime":"2026-01-24T03:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.019210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.019286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.019313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.019347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.019371 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.122828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.122887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.122903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.122923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.122935 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.226177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.226570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.226716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.226986 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.227133 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.330378 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.330440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.330463 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.330487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.330505 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.434179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.434258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.434283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.434314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.434337 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.537683 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.537789 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.537810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.537835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.537887 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.641051 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:25:36.504408085 +0000 UTC Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.641194 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.641273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.641298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.641330 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.641355 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.658804 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:02 crc kubenswrapper[4772]: E0124 03:43:02.659027 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.743672 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.743766 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.743784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.743894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.743913 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.847276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.847329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.847346 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.847371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.847389 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.950905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.950961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.950978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.951002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:02 crc kubenswrapper[4772]: I0124 03:43:02.951019 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:02Z","lastTransitionTime":"2026-01-24T03:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.054217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.054261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.054278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.054302 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.054319 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.158710 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.158831 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.158852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.158882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.158910 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.262561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.262625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.262648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.262676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.262698 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.366327 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.366695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.366886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.367059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.367214 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.471233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.471275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.471286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.471303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.471315 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.574857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.574960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.574983 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.575016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.575039 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.641816 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:51:07.935634233 +0000 UTC Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.658534 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.658704 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.659059 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:03 crc kubenswrapper[4772]: E0124 03:43:03.659031 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:03 crc kubenswrapper[4772]: E0124 03:43:03.659414 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:03 crc kubenswrapper[4772]: E0124 03:43:03.659525 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.678806 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8311b11-97fe-4657-add7-66fd66adc69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2xrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mpdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.679235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.679274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.679290 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.679315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.679332 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.698446 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kqp8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:51Z\\\",\\\"message\\\":\\\"2026-01-24T03:42:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54\\\\n2026-01-24T03:42:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b1ed58f-bb70-452f-ad5a-d1b74b4a6b54 to /host/opt/cni/bin/\\\\n2026-01-24T03:42:06Z [verbose] multus-daemon started\\\\n2026-01-24T03:42:06Z [verbose] Readiness Indicator file check\\\\n2026-01-24T03:42:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kqp8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.717155 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sldpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"729441e2-b077-4b39-921e-742c199ff8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3b8dac7939fc72dc36f70365b99c5a7d2adbe788a0ec434539998ad0f87469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfjrc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sldpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.740648 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a6ed09-b342-405d-ba5e-52d60ecfec68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b9bd0d2596bcd928b1566356d89c7435b813cc42c067d2ac6fe7b5c12eb719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4053c3d991e7e6feddb88000dcd2591ecb9917d86414318ce6364d25e42cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ac4e7c04436ac5ba0bb4242a7c98d6c5dd62ada33ca1e8e80bf73ca8521c67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726063213cfd173eb04a01d3ec504dcac9009a9a8b609ccd54ca22e434916604\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c0758c931ba72761f8a2f6042d751d7878b9f78933b035ecb137e8d44815c0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2cf77f814b165f64d1673c139e4ddfd9244b3c3137e0459bc5b0699e7d4b70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://948027c83583ed9a564eddb11ec7c36b99bf907b8cd778482b06b2a76424ada4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qp8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jvgzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.759341 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpkkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d4108fb-011c-4894-9f56-25a4d59d67cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49a3b5f5b89f94045bd77649e9cd3fded7a743b21ab1e4ec23adcbbd09776aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8rvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpkkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.779570 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64e69dca02c50348652f30e19705cfa24f6be36ad7a275ebbdfd01223efdbc9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.783660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.783717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.783778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.783814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.783838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.799472 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d65426d-9ece-4080-84e0-398c24a76c30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 03:41:57.173522 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 03:41:57.175059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4068491531/tls.crt::/tmp/serving-cert-4068491531/tls.key\\\\\\\"\\\\nI0124 03:42:03.300481 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 03:42:03.306787 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 03:42:03.306835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 03:42:03.306895 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 03:42:03.306909 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 03:42:03.316593 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 03:42:03.316649 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 03:42:03.316669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 03:42:03.316676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 03:42:03.316681 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 03:42:03.316689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0124 03:42:03.316895 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0124 03:42:03.323416 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.839142 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d12fd4b-f449-41fb-8fd4-40e041fe5604\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://284a2ce8a45e83e309093857f3e3391e4c8448d687afd6e870a0bdc270d1a72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c6a8506183de907f85811652beec68899f69f6acfa6d95957f3452810c2be28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d512cfe11313f8d17fff6ce93fc36c5fab75eb92299fc7c348e17b8671c8ba0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.882443 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f56473f-81bd-4479-8375-c173d2ef3729\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585cbbdf8821dbbe17b997bdcfd0714778e53eb9bfa3c394f60141ef2af6db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://198a675b05c73174d44b3cd42c8e36540d00bfcba9ec3629299ea4a7ef805532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb51274e03f7f8168381fbd7872854726f3e6d65f4c6dcd4e5f3e4c0985be643\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0d18022623fc550f7b8bd4da76816b88951cdcca6534f936857af9ede5ffbf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:41:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:41:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.886345 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.886406 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.886422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.886444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 
03:43:03.886463 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.899521 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.913771 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bddf4112533a4279080fecf63197911a398e59c055fab3eb96f54dfcf53bd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.925360 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e35da769c143952d15ff230919a20d53369bbb372931f182c71d3aab37135e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29djk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnn82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.937414 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcaf254-b568-4170-8068-e55bd06685a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84d62c8d005679d9a7915d426c1193e2bafc7486347049feaacaca14d10c27f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://920596e5bf8d82f1252d2ee3294b69a92c46cf64b14acb2aece092cbaca06ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwvmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ljl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.949693 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.963385 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.982527 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"849e85f7-2aca-4f00-a9be-a5f40979ad26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c13fe4fca72027acbfaefc779cd9cc3fa41898
8c38f1129c484060aefb9267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T03:42:56Z\\\",\\\"message\\\":\\\"iagnostics/network-check-source-55646444c4-trplf openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f openshift-image-registry/node-ca-gpkkg openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-bnn82 openshift-multus/multus-additional-cni-plugins-jvgzj openshift-network-operator/iptables-alerter-4ln5h openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/network-metrics-daemon-mpdb8]\\\\nI0124 03:42:56.454220 6781 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0124 03:42:56.454236 6781 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-mpdb8 before timer (time: 2026-01-24 03:42:57.494588062 +0000 UTC m=+1.568126136): skip\\\\nI0124 03:42:56.454261 6781 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0124 03:42:56.454317 6781 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T03:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T03:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dn2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T03:42:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c46s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.988243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.988274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.988286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.988304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.988317 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:03Z","lastTransitionTime":"2026-01-24T03:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:03 crc kubenswrapper[4772]: I0124 03:43:03.996161 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T03:42:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://617d7b6305362c779d3d70a2e594f5d0c8180a5aaadc989726b3ecc58632d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400fd49ef24e5fc1290d8f5af30605f7ebef10ac2c87d17daff51c11df4c43d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T03:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:03Z is after 2025-08-24T17:21:41Z" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.090802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 
03:43:04.090851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.090863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.091015 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.091027 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.193327 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.193636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.193776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.193879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.194057 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.296665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.297069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.297226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.297339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.297433 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.399960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.400200 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.400336 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.400450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.400566 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.503860 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.504162 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.504324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.504452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.504544 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.607515 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.607585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.607602 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.607628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.607646 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.642994 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:54:15.917252099 +0000 UTC Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.658719 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:04 crc kubenswrapper[4772]: E0124 03:43:04.658910 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.710475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.710557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.710570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.710588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.710601 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.814643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.814714 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.814780 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.814830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.814855 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.918135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.918215 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.918243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.918273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:04 crc kubenswrapper[4772]: I0124 03:43:04.918298 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:04Z","lastTransitionTime":"2026-01-24T03:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.021067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.021186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.021261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.021293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.021319 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.124906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.124946 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.124958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.124973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.124984 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.227330 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.227488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.227520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.227554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.227580 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.330334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.330395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.330463 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.330612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.330643 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.434360 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.434448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.434473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.434509 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.434533 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.537153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.537186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.537224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.537242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.537253 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.639650 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.639716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.639769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.639801 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.639825 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.643996 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:32:53.181177714 +0000 UTC Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.658407 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:05 crc kubenswrapper[4772]: E0124 03:43:05.658562 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.658808 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.658902 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:05 crc kubenswrapper[4772]: E0124 03:43:05.659088 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:05 crc kubenswrapper[4772]: E0124 03:43:05.659160 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.742590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.742628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.742640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.742657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.742670 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.845726 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.845810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.845828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.845881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.845900 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.948871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.948918 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.948930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.948948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:05 crc kubenswrapper[4772]: I0124 03:43:05.948959 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:05Z","lastTransitionTime":"2026-01-24T03:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.051676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.051726 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.051769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.051787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.051801 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.154699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.154768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.154786 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.154806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.154820 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.257406 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.257455 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.257466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.257482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.257493 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.360491 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.360550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.360570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.360596 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.360613 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.463397 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.463461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.463484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.463514 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.463537 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.566578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.566639 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.566666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.566854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.566892 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.644326 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:41:48.328796499 +0000 UTC Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.658656 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:06 crc kubenswrapper[4772]: E0124 03:43:06.658877 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.670440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.670502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.670527 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.670557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.670582 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.773595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.773665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.773688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.773719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.773778 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.876408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.876465 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.876482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.876505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.876524 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.980130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.980177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.980195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.980217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:06 crc kubenswrapper[4772]: I0124 03:43:06.980235 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:06Z","lastTransitionTime":"2026-01-24T03:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.083130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.083179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.083196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.083218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.083234 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.186275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.186358 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.186374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.186419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.186433 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.289222 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.289267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.289283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.289306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.289323 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.374798 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.374985 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.374953648 +0000 UTC m=+148.412044403 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.375096 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.375342 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.375450 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.375424842 +0000 UTC m=+148.412515607 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.392188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.392231 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.392250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.392276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.392295 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.475943 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.476405 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.476607 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.476137 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.477054 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.477019332 +0000 UTC m=+148.514110097 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.476539 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.477423 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.477624 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.477866 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.477847215 +0000 UTC m=+148.514937970 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.476702 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.478251 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.478411 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.478609 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.478586396 +0000 UTC m=+148.515677161 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.495344 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.495399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.495417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.495445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.495463 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.598658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.598707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.598727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.598809 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.598833 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.645317 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:14:14.758034306 +0000 UTC Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.658942 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.659134 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.659186 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.659324 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.659383 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:07 crc kubenswrapper[4772]: E0124 03:43:07.659477 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.701786 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.701834 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.701853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.701877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.701895 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.804795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.804858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.804880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.804908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.804930 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.908009 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.908050 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.908068 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.908092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:07 crc kubenswrapper[4772]: I0124 03:43:07.908108 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:07Z","lastTransitionTime":"2026-01-24T03:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.011928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.012000 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.012018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.012044 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.012063 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.114686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.115129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.115335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.115586 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.115838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.218828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.218886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.218904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.218927 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.218945 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.322364 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.322433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.322451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.322542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.322564 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.425859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.425919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.425938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.425963 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.425982 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.528656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.529186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.529399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.529572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.529759 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.633047 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.633114 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.633134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.633166 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.633194 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.646352 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:19:03.977582063 +0000 UTC Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.659105 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:08 crc kubenswrapper[4772]: E0124 03:43:08.659947 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.736190 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.736244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.736263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.736288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.736307 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.839418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.839873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.840029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.840213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.840371 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.943978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.944061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.944081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.944111 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:08 crc kubenswrapper[4772]: I0124 03:43:08.944132 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:08Z","lastTransitionTime":"2026-01-24T03:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.048304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.048366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.048385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.048409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.048429 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.152858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.153635 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.153850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.154047 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.154400 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.258022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.258356 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.258448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.258565 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.258692 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.362555 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.362652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.362675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.362707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.362724 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.465920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.466017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.466036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.466062 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.466079 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.569768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.569837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.569883 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.569917 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.569940 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.647462 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:42:26.172941242 +0000 UTC
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.657928 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.657986 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.658026 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:43:09 crc kubenswrapper[4772]: E0124 03:43:09.658461 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:43:09 crc kubenswrapper[4772]: E0124 03:43:09.658301 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:43:09 crc kubenswrapper[4772]: E0124 03:43:09.658691 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.672506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.672546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.672557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.672569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.672579 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.776048 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.776132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.776153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.776184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.776203 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.880024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.880093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.880113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.880138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.880156 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.983276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.983662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.983870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.984069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:09 crc kubenswrapper[4772]: I0124 03:43:09.984229 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:09Z","lastTransitionTime":"2026-01-24T03:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.087630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.087677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.087689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.087709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.087722 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.191844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.191966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.192025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.192059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.192122 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.296435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.296519 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.296540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.296601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.296620 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.400345 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.400432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.400460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.400497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.400521 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.504647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.504722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.504779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.504812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.504838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.608181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.608265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.608292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.608328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.608354 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.647823 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:51:15.197723154 +0000 UTC
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.658294 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:43:10 crc kubenswrapper[4772]: E0124 03:43:10.658629 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.659661 4772 scope.go:117] "RemoveContainer" containerID="20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267"
Jan 24 03:43:10 crc kubenswrapper[4772]: E0124 03:43:10.659932 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.712396 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.712499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.712528 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.712561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.712584 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.816684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.816771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.816790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.816818 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.816835 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.920276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.920380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.920413 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.920454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:10 crc kubenswrapper[4772]: I0124 03:43:10.920489 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:10Z","lastTransitionTime":"2026-01-24T03:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.024054 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.024120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.024140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.024168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.024186 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.127095 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.127155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.127176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.127205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.127225 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.230811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.230879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.230904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.230943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.230966 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.334570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.334649 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.334676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.334715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.334782 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.438715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.438810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.438830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.438859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.438878 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.542700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.542867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.542895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.542925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.543014 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.646087 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.646167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.646186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.646214 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.646233 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.648992 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:29:03.580047953 +0000 UTC
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.658488 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.658504 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.658644 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:43:11 crc kubenswrapper[4772]: E0124 03:43:11.658886 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:43:11 crc kubenswrapper[4772]: E0124 03:43:11.659059 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:43:11 crc kubenswrapper[4772]: E0124 03:43:11.659407 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.701874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.701937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.701955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.701979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.702000 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: E0124 03:43:11.723985 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T03:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d7b5535c-692f-4ec4-9db9-0dddf96ce11f\\\",\\\"systemUUID\\\":\\\"da4a4aba-e36f-4c4e-ae6d-d55fbd1e2268\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T03:43:11Z is after 2025-08-24T17:21:41Z"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.730841 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.730929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.730952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.730991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.731017 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:43:11Z","lastTransitionTime":"2026-01-24T03:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.810505 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"]
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.811006 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.813675 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.813892 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.814114 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.818658 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.823950 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f384043-ef67-48e9-9cd3-aacd4928e03b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.823994 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f384043-ef67-48e9-9cd3-aacd4928e03b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.824029 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f384043-ef67-48e9-9cd3-aacd4928e03b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.824052 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f384043-ef67-48e9-9cd3-aacd4928e03b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.824199 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f384043-ef67-48e9-9cd3-aacd4928e03b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.885805 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kqp8g" podStartSLOduration=67.885778578 podStartE2EDuration="1m7.885778578s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:11.885591542 +0000 UTC m=+88.922682277" watchObservedRunningTime="2026-01-24 03:43:11.885778578 +0000 UTC m=+88.922869343"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.900566 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-sldpz" podStartSLOduration=67.900540932 podStartE2EDuration="1m7.900540932s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:11.900013247 +0000 UTC m=+88.937103982" watchObservedRunningTime="2026-01-24 03:43:11.900540932 +0000 UTC m=+88.937631697"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.923721 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jvgzj" podStartSLOduration=67.923690661 podStartE2EDuration="1m7.923690661s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:11.923257849 +0000 UTC m=+88.960348594" watchObservedRunningTime="2026-01-24 03:43:11.923690661 +0000 UTC m=+88.960781426"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.925620 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f384043-ef67-48e9-9cd3-aacd4928e03b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.925697 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f384043-ef67-48e9-9cd3-aacd4928e03b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.925770 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f384043-ef67-48e9-9cd3-aacd4928e03b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.925841 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f384043-ef67-48e9-9cd3-aacd4928e03b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.926048 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f384043-ef67-48e9-9cd3-aacd4928e03b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.926134 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f384043-ef67-48e9-9cd3-aacd4928e03b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.926226 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f384043-ef67-48e9-9cd3-aacd4928e03b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.928375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f384043-ef67-48e9-9cd3-aacd4928e03b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.936500 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gpkkg" podStartSLOduration=67.9364746 podStartE2EDuration="1m7.9364746s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:11.936355676 +0000 UTC m=+88.973446421" watchObservedRunningTime="2026-01-24 03:43:11.9364746 +0000 UTC m=+88.973565335"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.940865 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f384043-ef67-48e9-9cd3-aacd4928e03b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.950673 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f384043-ef67-48e9-9cd3-aacd4928e03b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4zkb\" (UID: \"5f384043-ef67-48e9-9cd3-aacd4928e03b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:11 crc kubenswrapper[4772]: I0124 03:43:11.977178 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podStartSLOduration=67.977151641 podStartE2EDuration="1m7.977151641s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:11.957280113 +0000 UTC m=+88.994370848" watchObservedRunningTime="2026-01-24 03:43:11.977151641 +0000 UTC m=+89.014242376"
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.000175 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.000156176 podStartE2EDuration="1m9.000156176s" podCreationTimestamp="2026-01-24 03:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:11.999728004 +0000 UTC m=+89.036818739" watchObservedRunningTime="2026-01-24 03:43:12.000156176 +0000 UTC m=+89.037246901"
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.000439 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ljl6f" podStartSLOduration=68.000432704 podStartE2EDuration="1m8.000432704s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:11.977981164 +0000 UTC m=+89.015071909" watchObservedRunningTime="2026-01-24 03:43:12.000432704 +0000 UTC m=+89.037523429"
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.024585 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=67.024547631 podStartE2EDuration="1m7.024547631s" podCreationTimestamp="2026-01-24 03:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:12.023504891 +0000 UTC m=+89.060595626" watchObservedRunningTime="2026-01-24 03:43:12.024547631 +0000 UTC m=+89.061638396"
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.060101 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=33.060075907 podStartE2EDuration="33.060075907s" podCreationTimestamp="2026-01-24 03:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:12.04127636 +0000 UTC m=+89.078367095" watchObservedRunningTime="2026-01-24 03:43:12.060075907 +0000 UTC m=+89.097166652"
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.130429 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb"
Jan 24 03:43:12 crc kubenswrapper[4772]: W0124 03:43:12.143253 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f384043_ef67_48e9_9cd3_aacd4928e03b.slice/crio-01557ed5f422aa62744e276730f07093af6ff845d2687fa0d5857037107a5ad2 WatchSource:0}: Error finding container 01557ed5f422aa62744e276730f07093af6ff845d2687fa0d5857037107a5ad2: Status 404 returned error can't find the container with id 01557ed5f422aa62744e276730f07093af6ff845d2687fa0d5857037107a5ad2
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.200923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb" event={"ID":"5f384043-ef67-48e9-9cd3-aacd4928e03b","Type":"ContainerStarted","Data":"01557ed5f422aa62744e276730f07093af6ff845d2687fa0d5857037107a5ad2"}
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.649635 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:32:51.278654058 +0000 UTC
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.649757 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.658549 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:43:12 crc kubenswrapper[4772]: E0124 03:43:12.658680 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
Jan 24 03:43:12 crc kubenswrapper[4772]: I0124 03:43:12.659273 4772 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 24 03:43:13 crc kubenswrapper[4772]: I0124 03:43:13.206517 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb" event={"ID":"5f384043-ef67-48e9-9cd3-aacd4928e03b","Type":"ContainerStarted","Data":"9ba8cb790f2f9418c1235cc973efd3d8be03b60a05903c170594920fd27fb53d"}
Jan 24 03:43:13 crc kubenswrapper[4772]: I0124 03:43:13.230554 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4zkb" podStartSLOduration=69.230524801 podStartE2EDuration="1m9.230524801s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:13.229653127 +0000 UTC m=+90.266743882" watchObservedRunningTime="2026-01-24 03:43:13.230524801 +0000 UTC m=+90.267615566"
Jan 24 03:43:13 crc kubenswrapper[4772]: I0124 03:43:13.658059 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:43:13 crc kubenswrapper[4772]: I0124 03:43:13.658150 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:43:13 crc kubenswrapper[4772]: I0124 03:43:13.658081 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:43:13 crc kubenswrapper[4772]: E0124 03:43:13.659969 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:43:13 crc kubenswrapper[4772]: E0124 03:43:13.660091 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:43:13 crc kubenswrapper[4772]: E0124 03:43:13.660185 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:43:14 crc kubenswrapper[4772]: I0124 03:43:14.658433 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:43:14 crc kubenswrapper[4772]: E0124 03:43:14.658990 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
Jan 24 03:43:15 crc kubenswrapper[4772]: I0124 03:43:15.658092 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:43:15 crc kubenswrapper[4772]: I0124 03:43:15.658145 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:43:15 crc kubenswrapper[4772]: E0124 03:43:15.658312 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:43:15 crc kubenswrapper[4772]: E0124 03:43:15.658497 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:43:15 crc kubenswrapper[4772]: I0124 03:43:15.658832 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:43:15 crc kubenswrapper[4772]: E0124 03:43:15.658975 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:43:16 crc kubenswrapper[4772]: I0124 03:43:16.658293 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:43:16 crc kubenswrapper[4772]: E0124 03:43:16.658478 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
Jan 24 03:43:17 crc kubenswrapper[4772]: I0124 03:43:17.658062 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:43:17 crc kubenswrapper[4772]: I0124 03:43:17.658312 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:43:17 crc kubenswrapper[4772]: I0124 03:43:17.658392 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:43:17 crc kubenswrapper[4772]: E0124 03:43:17.658558 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 03:43:17 crc kubenswrapper[4772]: E0124 03:43:17.658945 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:43:17 crc kubenswrapper[4772]: E0124 03:43:17.659086 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:43:17 crc kubenswrapper[4772]: I0124 03:43:17.679828 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 24 03:43:18 crc kubenswrapper[4772]: I0124 03:43:18.658697 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8"
Jan 24 03:43:18 crc kubenswrapper[4772]: E0124 03:43:18.658915 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f"
Jan 24 03:43:19 crc kubenswrapper[4772]: I0124 03:43:19.659986 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 03:43:19 crc kubenswrapper[4772]: I0124 03:43:19.660217 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 03:43:19 crc kubenswrapper[4772]: E0124 03:43:19.660571 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 03:43:19 crc kubenswrapper[4772]: E0124 03:43:19.660871 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 03:43:19 crc kubenswrapper[4772]: I0124 03:43:19.660988 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:43:19 crc kubenswrapper[4772]: E0124 03:43:19.661114 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:20 crc kubenswrapper[4772]: I0124 03:43:20.658564 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:20 crc kubenswrapper[4772]: E0124 03:43:20.658879 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:21 crc kubenswrapper[4772]: I0124 03:43:21.657959 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:21 crc kubenswrapper[4772]: E0124 03:43:21.658112 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:21 crc kubenswrapper[4772]: I0124 03:43:21.658145 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:21 crc kubenswrapper[4772]: I0124 03:43:21.658182 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:21 crc kubenswrapper[4772]: E0124 03:43:21.658608 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:21 crc kubenswrapper[4772]: E0124 03:43:21.659015 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:21 crc kubenswrapper[4772]: I0124 03:43:21.682275 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 24 03:43:22 crc kubenswrapper[4772]: I0124 03:43:22.658058 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:22 crc kubenswrapper[4772]: E0124 03:43:22.658297 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:22 crc kubenswrapper[4772]: I0124 03:43:22.956371 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:22 crc kubenswrapper[4772]: E0124 03:43:22.956618 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:43:22 crc kubenswrapper[4772]: E0124 03:43:22.956728 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs podName:e8311b11-97fe-4657-add7-66fd66adc69f nodeName:}" failed. No retries permitted until 2026-01-24 03:44:26.956696493 +0000 UTC m=+163.993787258 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs") pod "network-metrics-daemon-mpdb8" (UID: "e8311b11-97fe-4657-add7-66fd66adc69f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 03:43:23 crc kubenswrapper[4772]: I0124 03:43:23.658104 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:23 crc kubenswrapper[4772]: I0124 03:43:23.658199 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:23 crc kubenswrapper[4772]: E0124 03:43:23.660329 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:23 crc kubenswrapper[4772]: I0124 03:43:23.660459 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:23 crc kubenswrapper[4772]: E0124 03:43:23.660949 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:23 crc kubenswrapper[4772]: E0124 03:43:23.661106 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:23 crc kubenswrapper[4772]: I0124 03:43:23.680024 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.680001164 podStartE2EDuration="6.680001164s" podCreationTimestamp="2026-01-24 03:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:23.679300014 +0000 UTC m=+100.716390759" watchObservedRunningTime="2026-01-24 03:43:23.680001164 +0000 UTC m=+100.717091889" Jan 24 03:43:23 crc kubenswrapper[4772]: I0124 03:43:23.730113 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.730075999 podStartE2EDuration="2.730075999s" podCreationTimestamp="2026-01-24 03:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:23.728264558 +0000 UTC m=+100.765355313" watchObservedRunningTime="2026-01-24 03:43:23.730075999 +0000 UTC m=+100.767166764" Jan 24 03:43:24 crc kubenswrapper[4772]: I0124 03:43:24.658563 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:24 crc kubenswrapper[4772]: E0124 03:43:24.658807 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:24 crc kubenswrapper[4772]: I0124 03:43:24.659957 4772 scope.go:117] "RemoveContainer" containerID="20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267" Jan 24 03:43:24 crc kubenswrapper[4772]: E0124 03:43:24.660721 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c46s_openshift-ovn-kubernetes(849e85f7-2aca-4f00-a9be-a5f40979ad26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" Jan 24 03:43:25 crc kubenswrapper[4772]: I0124 03:43:25.658916 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:25 crc kubenswrapper[4772]: I0124 03:43:25.659018 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:25 crc kubenswrapper[4772]: E0124 03:43:25.659077 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:25 crc kubenswrapper[4772]: E0124 03:43:25.659167 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:25 crc kubenswrapper[4772]: I0124 03:43:25.659955 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:25 crc kubenswrapper[4772]: E0124 03:43:25.660099 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:26 crc kubenswrapper[4772]: I0124 03:43:26.658541 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:26 crc kubenswrapper[4772]: E0124 03:43:26.658695 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:27 crc kubenswrapper[4772]: I0124 03:43:27.658458 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:27 crc kubenswrapper[4772]: I0124 03:43:27.658621 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:27 crc kubenswrapper[4772]: E0124 03:43:27.658859 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:27 crc kubenswrapper[4772]: E0124 03:43:27.659055 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:27 crc kubenswrapper[4772]: I0124 03:43:27.659145 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:27 crc kubenswrapper[4772]: E0124 03:43:27.659292 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:28 crc kubenswrapper[4772]: I0124 03:43:28.658487 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:28 crc kubenswrapper[4772]: E0124 03:43:28.658913 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:29 crc kubenswrapper[4772]: I0124 03:43:29.658672 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:29 crc kubenswrapper[4772]: I0124 03:43:29.658754 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:29 crc kubenswrapper[4772]: E0124 03:43:29.659364 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:29 crc kubenswrapper[4772]: I0124 03:43:29.658783 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:29 crc kubenswrapper[4772]: E0124 03:43:29.659490 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:29 crc kubenswrapper[4772]: E0124 03:43:29.660117 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:30 crc kubenswrapper[4772]: I0124 03:43:30.658520 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:30 crc kubenswrapper[4772]: E0124 03:43:30.659573 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:31 crc kubenswrapper[4772]: I0124 03:43:31.658139 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:31 crc kubenswrapper[4772]: I0124 03:43:31.658187 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:31 crc kubenswrapper[4772]: I0124 03:43:31.658139 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:31 crc kubenswrapper[4772]: E0124 03:43:31.658286 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:31 crc kubenswrapper[4772]: E0124 03:43:31.658340 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:31 crc kubenswrapper[4772]: E0124 03:43:31.658393 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:32 crc kubenswrapper[4772]: I0124 03:43:32.658087 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:32 crc kubenswrapper[4772]: E0124 03:43:32.658549 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:33 crc kubenswrapper[4772]: I0124 03:43:33.659172 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:33 crc kubenswrapper[4772]: I0124 03:43:33.659274 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:33 crc kubenswrapper[4772]: I0124 03:43:33.659180 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:33 crc kubenswrapper[4772]: E0124 03:43:33.661118 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:33 crc kubenswrapper[4772]: E0124 03:43:33.661255 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:33 crc kubenswrapper[4772]: E0124 03:43:33.661506 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:34 crc kubenswrapper[4772]: I0124 03:43:34.658704 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:34 crc kubenswrapper[4772]: E0124 03:43:34.659264 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:35 crc kubenswrapper[4772]: I0124 03:43:35.658352 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:35 crc kubenswrapper[4772]: I0124 03:43:35.658386 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:35 crc kubenswrapper[4772]: E0124 03:43:35.658554 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:35 crc kubenswrapper[4772]: I0124 03:43:35.658702 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:35 crc kubenswrapper[4772]: E0124 03:43:35.658913 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:35 crc kubenswrapper[4772]: E0124 03:43:35.659096 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:36 crc kubenswrapper[4772]: I0124 03:43:36.658007 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:36 crc kubenswrapper[4772]: E0124 03:43:36.658234 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:37 crc kubenswrapper[4772]: I0124 03:43:37.658117 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:37 crc kubenswrapper[4772]: I0124 03:43:37.658214 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:37 crc kubenswrapper[4772]: E0124 03:43:37.658311 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:37 crc kubenswrapper[4772]: E0124 03:43:37.658449 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:37 crc kubenswrapper[4772]: I0124 03:43:37.658142 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:37 crc kubenswrapper[4772]: E0124 03:43:37.658570 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.308925 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/1.log" Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.309485 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/0.log" Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.309545 4772 generic.go:334] "Generic (PLEG): container finished" podID="3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d" containerID="358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9" exitCode=1 Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.309576 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerDied","Data":"358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9"} Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.309612 4772 scope.go:117] "RemoveContainer" containerID="ddab3e9a911e56ac839cbd96aa0071373897522330bc50cc6f0a114d3e66d2b0" Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.310652 4772 scope.go:117] "RemoveContainer" containerID="358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9" Jan 24 03:43:38 crc kubenswrapper[4772]: E0124 03:43:38.311122 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kqp8g_openshift-multus(3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d)\"" pod="openshift-multus/multus-kqp8g" podUID="3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d" Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.658414 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:38 crc kubenswrapper[4772]: E0124 03:43:38.658691 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:38 crc kubenswrapper[4772]: I0124 03:43:38.659975 4772 scope.go:117] "RemoveContainer" containerID="20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.314954 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/3.log" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.318121 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerStarted","Data":"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7"} Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.318499 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.319561 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/1.log" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.349509 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podStartSLOduration=95.349474433 podStartE2EDuration="1m35.349474433s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:43:39.348045603 +0000 UTC m=+116.385136328" watchObservedRunningTime="2026-01-24 03:43:39.349474433 +0000 UTC m=+116.386565198" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.644870 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mpdb8"] Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.645047 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:39 crc kubenswrapper[4772]: E0124 03:43:39.645150 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.658688 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.658812 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:39 crc kubenswrapper[4772]: I0124 03:43:39.658938 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:39 crc kubenswrapper[4772]: E0124 03:43:39.658808 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:39 crc kubenswrapper[4772]: E0124 03:43:39.658996 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:39 crc kubenswrapper[4772]: E0124 03:43:39.659126 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:41 crc kubenswrapper[4772]: I0124 03:43:41.659040 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:41 crc kubenswrapper[4772]: E0124 03:43:41.659307 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:41 crc kubenswrapper[4772]: I0124 03:43:41.659473 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:41 crc kubenswrapper[4772]: I0124 03:43:41.659594 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:41 crc kubenswrapper[4772]: E0124 03:43:41.659675 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:41 crc kubenswrapper[4772]: E0124 03:43:41.659887 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:41 crc kubenswrapper[4772]: I0124 03:43:41.659945 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:41 crc kubenswrapper[4772]: E0124 03:43:41.660074 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:43 crc kubenswrapper[4772]: E0124 03:43:43.629732 4772 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 24 03:43:43 crc kubenswrapper[4772]: I0124 03:43:43.657907 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:43 crc kubenswrapper[4772]: I0124 03:43:43.658017 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:43 crc kubenswrapper[4772]: I0124 03:43:43.658017 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:43 crc kubenswrapper[4772]: E0124 03:43:43.659384 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:43 crc kubenswrapper[4772]: I0124 03:43:43.659549 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:43 crc kubenswrapper[4772]: E0124 03:43:43.659676 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:43 crc kubenswrapper[4772]: E0124 03:43:43.660260 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:43 crc kubenswrapper[4772]: E0124 03:43:43.660439 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:43 crc kubenswrapper[4772]: E0124 03:43:43.777791 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 03:43:45 crc kubenswrapper[4772]: I0124 03:43:45.336598 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:43:45 crc kubenswrapper[4772]: I0124 03:43:45.658404 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:45 crc kubenswrapper[4772]: I0124 03:43:45.658437 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:45 crc kubenswrapper[4772]: E0124 03:43:45.659168 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:45 crc kubenswrapper[4772]: I0124 03:43:45.658573 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:45 crc kubenswrapper[4772]: E0124 03:43:45.659309 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:45 crc kubenswrapper[4772]: E0124 03:43:45.659612 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:45 crc kubenswrapper[4772]: I0124 03:43:45.658525 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:45 crc kubenswrapper[4772]: E0124 03:43:45.660212 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:47 crc kubenswrapper[4772]: I0124 03:43:47.658705 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:47 crc kubenswrapper[4772]: I0124 03:43:47.658706 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:47 crc kubenswrapper[4772]: I0124 03:43:47.658873 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:47 crc kubenswrapper[4772]: I0124 03:43:47.658922 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:47 crc kubenswrapper[4772]: E0124 03:43:47.659091 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:47 crc kubenswrapper[4772]: E0124 03:43:47.659267 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:47 crc kubenswrapper[4772]: E0124 03:43:47.659405 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:47 crc kubenswrapper[4772]: E0124 03:43:47.659705 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:48 crc kubenswrapper[4772]: E0124 03:43:48.779399 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 03:43:49 crc kubenswrapper[4772]: I0124 03:43:49.658800 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:49 crc kubenswrapper[4772]: E0124 03:43:49.658999 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:49 crc kubenswrapper[4772]: I0124 03:43:49.659029 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:49 crc kubenswrapper[4772]: I0124 03:43:49.659035 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:49 crc kubenswrapper[4772]: I0124 03:43:49.659106 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:49 crc kubenswrapper[4772]: E0124 03:43:49.659543 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:49 crc kubenswrapper[4772]: E0124 03:43:49.659643 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:49 crc kubenswrapper[4772]: E0124 03:43:49.659858 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:49 crc kubenswrapper[4772]: I0124 03:43:49.659936 4772 scope.go:117] "RemoveContainer" containerID="358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9" Jan 24 03:43:50 crc kubenswrapper[4772]: I0124 03:43:50.368382 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/1.log" Jan 24 03:43:50 crc kubenswrapper[4772]: I0124 03:43:50.368482 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerStarted","Data":"4303a17707279e168ab66f97fbff46b2d3b3e2c3dff4c390520b5d78c75594ed"} Jan 24 03:43:51 crc kubenswrapper[4772]: I0124 03:43:51.658519 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:51 crc kubenswrapper[4772]: E0124 03:43:51.658999 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:51 crc kubenswrapper[4772]: I0124 03:43:51.658586 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:51 crc kubenswrapper[4772]: E0124 03:43:51.659130 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:51 crc kubenswrapper[4772]: I0124 03:43:51.658520 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:51 crc kubenswrapper[4772]: E0124 03:43:51.659193 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:51 crc kubenswrapper[4772]: I0124 03:43:51.658639 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:51 crc kubenswrapper[4772]: E0124 03:43:51.659254 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:53 crc kubenswrapper[4772]: I0124 03:43:53.657975 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:53 crc kubenswrapper[4772]: I0124 03:43:53.660850 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:53 crc kubenswrapper[4772]: E0124 03:43:53.660853 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 03:43:53 crc kubenswrapper[4772]: I0124 03:43:53.660911 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:53 crc kubenswrapper[4772]: I0124 03:43:53.660898 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:53 crc kubenswrapper[4772]: E0124 03:43:53.661015 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 03:43:53 crc kubenswrapper[4772]: E0124 03:43:53.661095 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 03:43:53 crc kubenswrapper[4772]: E0124 03:43:53.661164 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mpdb8" podUID="e8311b11-97fe-4657-add7-66fd66adc69f" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.659079 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.659167 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.659100 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.659357 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.663947 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.664026 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.664643 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.665613 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.666002 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 24 03:43:55 crc kubenswrapper[4772]: I0124 03:43:55.666210 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.557956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.657627 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l9vkq"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.675459 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.675793 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.676645 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g9ckr"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.677867 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d8xg2"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.678306 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.678927 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.679727 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.680010 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.680169 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.680383 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.680590 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-86pjh"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.681873 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.681923 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.680855 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.682296 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.682364 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.682425 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.681983 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-86pjh" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.684471 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sflxf"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.685317 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.685956 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2sfxs"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.686452 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.687541 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.687776 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.686963 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.687084 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.687027 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.688298 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.684907 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.690007 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wwhc2"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.690631 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.735801 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.738789 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.739135 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.739433 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.739716 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.740081 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.740375 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.740648 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.740891 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.741334 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.741584 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.741701 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.742341 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.742501 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.743042 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 
03:44:02.743238 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.743384 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.743441 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.743470 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.743631 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.743880 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744006 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744085 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744189 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744228 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744249 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744283 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744296 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744478 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744722 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744787 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744838 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744907 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.744937 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745075 4772 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.743632 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745280 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-99789"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745099 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745543 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745635 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745685 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745707 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745727 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745825 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745883 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.745970 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.746043 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.746333 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.746637 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.746749 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mshdh"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.747165 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.747532 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.748191 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.751886 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.752241 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.753054 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.754345 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.754539 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.755591 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.756940 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.757465 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.761983 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.780154 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.780537 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.780764 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.781064 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lpsdw"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.781299 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.781327 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.796517 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mk8n7"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.797883 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8vwrd"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.798320 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5jj66"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.799013 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.799757 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.800056 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.800101 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.800812 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.801250 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.821271 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.821795 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.823785 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.824593 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.827060 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.827542 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.827899 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828245 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828375 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828429 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828538 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828552 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828616 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828727 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.828829 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.829092 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.829324 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.829488 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.829582 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.829994 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.830125 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.830183 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 
24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.830271 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.830434 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.830771 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.830863 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.830942 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831029 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831113 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831210 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831304 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831371 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831777 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831788 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.831981 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.832280 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.832638 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.832751 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.833568 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.834221 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.836029 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.836841 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.836858 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.839860 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.839913 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840032 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840070 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840140 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840159 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840265 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840325 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840374 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840438 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840486 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840585 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840684 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.840917 4772 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.841105 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.846387 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.849221 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ktgts"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.849809 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.850083 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g767c"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.850526 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.850916 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.851444 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.851475 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.854471 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.854864 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9j994"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.855376 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.855621 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.862995 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.864988 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.863834 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.868357 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l9vkq"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.868374 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-86pjh"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.868389 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sflxf"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.868400 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d8xg2"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.868909 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.878871 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.879702 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cn7fb"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.879713 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.880187 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.897552 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g9ckr"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.898960 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.901767 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.910143 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.910186 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-szx8j"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.910989 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.911063 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.912382 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.914492 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.919910 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.924994 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5jj66"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.926523 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mk8n7"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.929235 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.930755 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.932341 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2sfxs"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.933777 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9j994"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.934821 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.937437 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.938917 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mshdh"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.939074 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.940223 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.941358 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wwhc2"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.942570 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mq527"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.943528 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mq527" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.943783 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lfvrg"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.944201 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lfvrg" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.944706 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-99789"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.945749 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.946679 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8vwrd"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.948252 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lpsdw"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.949333 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.950332 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.951324 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mq527"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.952465 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.953633 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.954674 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.955654 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.956697 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.957775 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lfvrg"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.958612 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.958953 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g767c"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.959729 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-szx8j"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.960767 4772 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"] Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.978505 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 24 03:44:02 crc kubenswrapper[4772]: I0124 03:44:02.998315 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.018529 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.039180 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.059846 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.079931 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.099035 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.119412 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.140366 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.162584 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.179846 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.199778 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.219995 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.252425 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.259148 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.279437 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.309697 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.330116 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.338848 4772 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.359872 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.378974 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.399436 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.420170 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.438882 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.460178 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.479844 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.499477 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.524926 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.539589 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.559865 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.580267 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.600715 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648033 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9998l\" (UniqueName: \"kubernetes.io/projected/fbf79dca-3857-4554-9b1a-8b98d98c88ad-kube-api-access-9998l\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648129 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c6102d3-cae6-4cfe-b951-68c5f36eef94-serving-cert\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648274 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-tls\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648325 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-etcd-serving-ca\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648363 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wnrr\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-kube-api-access-8wnrr\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648394 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-794mh\" (UniqueName: \"kubernetes.io/projected/58f2c7e6-4613-44d2-a060-2abdf10e01a2-kube-api-access-794mh\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648413 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-bound-sa-token\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648436 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9kqk\" (UniqueName: \"kubernetes.io/projected/6c6102d3-cae6-4cfe-b951-68c5f36eef94-kube-api-access-p9kqk\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648458 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-service-ca\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648481 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b58435a-62c0-4129-a8d5-434a75e0f600-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 
crc kubenswrapper[4772]: I0124 03:44:03.648506 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c289b5b-107c-4fce-8774-93bdf5001627-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hd4qv\" (UID: \"9c289b5b-107c-4fce-8774-93bdf5001627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648533 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74ba3508-b86b-4ab5-8a85-91dddde0df79-machine-approver-tls\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648552 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-oauth-serving-cert\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648693 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648834 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74ba3508-b86b-4ab5-8a85-91dddde0df79-auth-proxy-config\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648886 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-config\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648926 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b58435a-62c0-4129-a8d5-434a75e0f600-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.648962 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f2c7e6-4613-44d2-a060-2abdf10e01a2-audit-dir\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: E0124 03:44:03.649130 4772 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.149069488 +0000 UTC m=+141.186160413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649238 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-certificates\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649314 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649383 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-config\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-config\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8bnl\" (UniqueName: \"kubernetes.io/projected/26bbc842-3687-4b32-8530-6ae150b7126b-kube-api-access-c8bnl\") pod \"multus-admission-controller-857f4d67dd-99789\" (UID: \"26bbc842-3687-4b32-8530-6ae150b7126b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649581 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8326b678-6225-4d2b-bec0-7486245510cb-config\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649659 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-audit\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649796 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-service-ca-bundle\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649850 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649887 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf79dca-3857-4554-9b1a-8b98d98c88ad-serving-cert\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649918 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-oauth-config\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649946 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.649988 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-client-ca\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650125 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwwkv\" (UniqueName: \"kubernetes.io/projected/9c289b5b-107c-4fce-8774-93bdf5001627-kube-api-access-qwwkv\") pod \"package-server-manager-789f6589d5-hd4qv\" (UID: \"9c289b5b-107c-4fce-8774-93bdf5001627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650161 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642240f6-95c9-4447-a5d0-d6c700550863-config\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650195 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642240f6-95c9-4447-a5d0-d6c700550863-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjn9r\" (UniqueName: \"kubernetes.io/projected/74ba3508-b86b-4ab5-8a85-91dddde0df79-kube-api-access-vjn9r\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650330 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7nq\" (UniqueName: \"kubernetes.io/projected/fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610-kube-api-access-nc7nq\") pod \"downloads-7954f5f757-86pjh\" (UID: \"fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610\") " pod="openshift-console/downloads-7954f5f757-86pjh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650390 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-config\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-trusted-ca-bundle\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650480 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650524 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-encryption-config\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650570 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b407e4f-8708-456d-8e26-334da5ec43e7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650628 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650834 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b407e4f-8708-456d-8e26-334da5ec43e7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650879 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8lxr\" (UniqueName: \"kubernetes.io/projected/3b407e4f-8708-456d-8e26-334da5ec43e7-kube-api-access-d8lxr\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650914 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26bbc842-3687-4b32-8530-6ae150b7126b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-99789\" (UID: \"26bbc842-3687-4b32-8530-6ae150b7126b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650955 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc81cbc1-3a36-4bbc-8b30-198766877216-metrics-tls\") pod \"dns-operator-744455d44c-wwhc2\" (UID: \"fc81cbc1-3a36-4bbc-8b30-198766877216\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.650987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-etcd-client\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651020 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-proxy-tls\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651057 
4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-metrics-tls\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651124 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmr5d\" (UniqueName: \"kubernetes.io/projected/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-kube-api-access-mmr5d\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651208 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8326b678-6225-4d2b-bec0-7486245510cb-trusted-ca\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651269 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2v8c\" (UniqueName: \"kubernetes.io/projected/8326b678-6225-4d2b-bec0-7486245510cb-kube-api-access-t2v8c\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-serving-cert\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651385 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7rf\" (UniqueName: \"kubernetes.io/projected/c54dc1be-1a2d-433d-bb84-a274bdd4365b-kube-api-access-sc7rf\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651430 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ba3508-b86b-4ab5-8a85-91dddde0df79-config\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651490 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6npb\" (UniqueName: \"kubernetes.io/projected/fc81cbc1-3a36-4bbc-8b30-198766877216-kube-api-access-l6npb\") pod \"dns-operator-744455d44c-wwhc2\" (UID: \"fc81cbc1-3a36-4bbc-8b30-198766877216\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651529 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/58f2c7e6-4613-44d2-a060-2abdf10e01a2-node-pullsecrets\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651569 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-image-import-ca\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651609 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651671 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-serving-cert\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651713 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651836 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-kube-api-access-hsvmz\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651896 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/642240f6-95c9-4447-a5d0-d6c700550863-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.651953 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-trusted-ca\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.652005 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-trusted-ca\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.652066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8326b678-6225-4d2b-bec0-7486245510cb-serving-cert\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.662164 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.679847 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.699551 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.720111 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.739841 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.753367 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.753525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-etcd-serving-ca\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: E0124 03:44:03.753554 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.253515668 +0000 UTC m=+141.290606433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.753611 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.753671 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62148cbe-9135-4627-8b05-05f8f4465d20-config\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.753730 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-tls\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754026 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754114 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/735b86db-4d0b-4f1e-bfef-74f16fffa13d-proxy-tls\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754283 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/249748cf-cf33-40d6-bbe4-74af2c05395c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754483 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74ba3508-b86b-4ab5-8a85-91dddde0df79-machine-approver-tls\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754568 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754608 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4074e9d5-7b79-41fe-9b4c-932a4bd47883-profile-collector-cert\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754654 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74ba3508-b86b-4ab5-8a85-91dddde0df79-auth-proxy-config\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754689 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-service-ca\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-config\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754838 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62148cbe-9135-4627-8b05-05f8f4465d20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754881 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-config\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754915 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f2c7e6-4613-44d2-a060-2abdf10e01a2-audit-dir\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754987 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-etcd-serving-ca\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.754966 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f342eb-5009-4dcc-a013-426ea46c1959-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755090 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxk6t\" (UniqueName: \"kubernetes.io/projected/62148cbe-9135-4627-8b05-05f8f4465d20-kube-api-access-cxk6t\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755096 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58f2c7e6-4613-44d2-a060-2abdf10e01a2-audit-dir\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755134 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-certificates\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-dir\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755309 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-config\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755349 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69494268-9917-4f39-a8d8-0a73e898ea6e-serving-cert\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755403 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755439 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-encryption-config\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755542 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8326b678-6225-4d2b-bec0-7486245510cb-config\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62148cbe-9135-4627-8b05-05f8f4465d20-images\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755670 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755703 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-audit-policies\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755770 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755803 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-client-ca\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755838 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-node-bootstrap-token\") pod 
\"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755871 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbzt\" (UniqueName: \"kubernetes.io/projected/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-kube-api-access-rcbzt\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcswz\" (UniqueName: \"kubernetes.io/projected/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-kube-api-access-mcswz\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755935 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwwkv\" (UniqueName: \"kubernetes.io/projected/9c289b5b-107c-4fce-8774-93bdf5001627-kube-api-access-qwwkv\") pod \"package-server-manager-789f6589d5-hd4qv\" (UID: \"9c289b5b-107c-4fce-8774-93bdf5001627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.755989 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96315b77-d775-4329-8a85-0fd2705bf278-config-volume\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756023 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57n7\" (UniqueName: \"kubernetes.io/projected/e6f342eb-5009-4dcc-a013-426ea46c1959-kube-api-access-s57n7\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756060 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642240f6-95c9-4447-a5d0-d6c700550863-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756097 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c951b6b-540d-4a3d-aeae-33ad49519b13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756158 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756169 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-config\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756209 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6d4a349-9797-4909-b0fe-eca12c6f7435-signing-cabundle\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756223 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74ba3508-b86b-4ab5-8a85-91dddde0df79-auth-proxy-config\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-trusted-ca-bundle\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c951b6b-540d-4a3d-aeae-33ad49519b13-webhook-cert\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756533 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw6p2\" (UniqueName: \"kubernetes.io/projected/4074e9d5-7b79-41fe-9b4c-932a4bd47883-kube-api-access-gw6p2\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756599 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-config\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756648 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756693 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-mountpoint-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-encryption-config\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756834 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b407e4f-8708-456d-8e26-334da5ec43e7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756923 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-serving-cert\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.756974 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-serving-cert\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757012 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-plugins-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 
03:44:03.757058 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26bbc842-3687-4b32-8530-6ae150b7126b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-99789\" (UID: \"26bbc842-3687-4b32-8530-6ae150b7126b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757104 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6d4a349-9797-4909-b0fe-eca12c6f7435-signing-key\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757151 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b407e4f-8708-456d-8e26-334da5ec43e7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8lxr\" (UniqueName: \"kubernetes.io/projected/3b407e4f-8708-456d-8e26-334da5ec43e7-kube-api-access-d8lxr\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757290 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-proxy-tls\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2v8c\" (UniqueName: \"kubernetes.io/projected/8326b678-6225-4d2b-bec0-7486245510cb-kube-api-access-t2v8c\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757367 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-serving-cert\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7rf\" (UniqueName: \"kubernetes.io/projected/c54dc1be-1a2d-433d-bb84-a274bdd4365b-kube-api-access-sc7rf\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757442 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757480 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/249748cf-cf33-40d6-bbe4-74af2c05395c-config\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757516 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757548 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3c951b6b-540d-4a3d-aeae-33ad49519b13-tmpfs\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757585 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rzp\" (UniqueName: \"kubernetes.io/projected/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-kube-api-access-v8rzp\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757583 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8326b678-6225-4d2b-bec0-7486245510cb-config\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757625 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-registration-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757643 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757925 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-certificates\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.757961 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-trusted-ca-bundle\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.758128 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6npb\" (UniqueName: \"kubernetes.io/projected/fc81cbc1-3a36-4bbc-8b30-198766877216-kube-api-access-l6npb\") pod \"dns-operator-744455d44c-wwhc2\" (UID: \"fc81cbc1-3a36-4bbc-8b30-198766877216\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.758180 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/58f2c7e6-4613-44d2-a060-2abdf10e01a2-node-pullsecrets\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.758218 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-image-import-ca\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.758380 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-client-ca\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.758446 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.758975 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/58f2c7e6-4613-44d2-a060-2abdf10e01a2-node-pullsecrets\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.758980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-serving-cert\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759050 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-config\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759169 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8x7p\" (UniqueName: \"kubernetes.io/projected/427804d2-581e-4a93-ac0b-3e98b25182fc-kube-api-access-k8x7p\") pod \"ingress-canary-lfvrg\" (UID: \"427804d2-581e-4a93-ac0b-3e98b25182fc\") " pod="openshift-ingress-canary/ingress-canary-lfvrg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759277 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759314 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f98b89d5-0baf-4892-8d9b-44e64a3d793b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c5k7g\" (UID: \"f98b89d5-0baf-4892-8d9b-44e64a3d793b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759367 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9d3510-946e-49b9-bd37-aa49de76ee43-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-kube-api-access-hsvmz\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759444 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-policies\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759452 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b407e4f-8708-456d-8e26-334da5ec43e7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759482 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-trusted-ca\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759515 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8326b678-6225-4d2b-bec0-7486245510cb-serving-cert\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759611 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c6102d3-cae6-4cfe-b951-68c5f36eef94-serving-cert\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759649 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgm9\" (UniqueName: \"kubernetes.io/projected/e18f918d-3751-4397-8029-4b1a3bc87953-kube-api-access-dzgm9\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759681 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/427804d2-581e-4a93-ac0b-3e98b25182fc-cert\") pod \"ingress-canary-lfvrg\" (UID: \"427804d2-581e-4a93-ac0b-3e98b25182fc\") " pod="openshift-ingress-canary/ingress-canary-lfvrg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759735 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9998l\" (UniqueName: \"kubernetes.io/projected/fbf79dca-3857-4554-9b1a-8b98d98c88ad-kube-api-access-9998l\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759799 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-config\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759841 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-config-volume\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.759864 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-config\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.761097 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-image-import-ca\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.761411 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.761689 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-config\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.762257 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-trusted-ca\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.762326 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74ba3508-b86b-4ab5-8a85-91dddde0df79-machine-approver-tls\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.762774 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vkd\" (UniqueName: \"kubernetes.io/projected/d7498c90-00fc-4024-8509-c135ff6ce906-kube-api-access-t2vkd\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.762847 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/55c2640f-1744-4c8e-a11c-85efa6215c47-srv-cert\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.762897 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.762956 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-bound-sa-token\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wnrr\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-kube-api-access-8wnrr\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763121 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-794mh\" (UniqueName: \"kubernetes.io/projected/58f2c7e6-4613-44d2-a060-2abdf10e01a2-kube-api-access-794mh\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763172 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfvhs\" (UniqueName: \"kubernetes.io/projected/96315b77-d775-4329-8a85-0fd2705bf278-kube-api-access-wfvhs\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763211 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wn7\" (UniqueName: \"kubernetes.io/projected/55c2640f-1744-4c8e-a11c-85efa6215c47-kube-api-access-z2wn7\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9kqk\" (UniqueName: \"kubernetes.io/projected/6c6102d3-cae6-4cfe-b951-68c5f36eef94-kube-api-access-p9kqk\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763332 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-service-ca\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763367 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c289b5b-107c-4fce-8774-93bdf5001627-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hd4qv\" 
(UID: \"9c289b5b-107c-4fce-8774-93bdf5001627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763402 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-oauth-serving-cert\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763437 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f342eb-5009-4dcc-a013-426ea46c1959-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763451 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-tls\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763470 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-metrics-tls\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763519 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b58435a-62c0-4129-a8d5-434a75e0f600-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763593 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763643 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck89j\" (UniqueName: \"kubernetes.io/projected/f98b89d5-0baf-4892-8d9b-44e64a3d793b-kube-api-access-ck89j\") pod \"cluster-samples-operator-665b6dd947-c5k7g\" (UID: \"f98b89d5-0baf-4892-8d9b-44e64a3d793b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763680 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763713 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763776 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b58435a-62c0-4129-a8d5-434a75e0f600-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763811 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763853 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-serving-cert\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763911 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763946 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/735b86db-4d0b-4f1e-bfef-74f16fffa13d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.763988 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764021 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-default-certificate\") pod 
\"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764053 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-socket-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764094 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-config\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764130 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8bnl\" (UniqueName: \"kubernetes.io/projected/26bbc842-3687-4b32-8530-6ae150b7126b-kube-api-access-c8bnl\") pod \"multus-admission-controller-857f4d67dd-99789\" (UID: \"26bbc842-3687-4b32-8530-6ae150b7126b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764165 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnm2r\" (UniqueName: \"kubernetes.io/projected/69494268-9917-4f39-a8d8-0a73e898ea6e-kube-api-access-fnm2r\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764204 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7s4r\" (UniqueName: \"kubernetes.io/projected/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-kube-api-access-t7s4r\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764237 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmgx\" (UniqueName: \"kubernetes.io/projected/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-kube-api-access-gwmgx\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764276 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-audit\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764323 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-certs\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " 
pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764354 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4074e9d5-7b79-41fe-9b4c-932a4bd47883-srv-cert\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764392 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf79dca-3857-4554-9b1a-8b98d98c88ad-serving-cert\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-service-ca-bundle\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764514 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b407e4f-8708-456d-8e26-334da5ec43e7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764523 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/735b86db-4d0b-4f1e-bfef-74f16fffa13d-images\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764613 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-oauth-config\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764654 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-serving-cert\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc 
kubenswrapper[4772]: I0124 03:44:03.764705 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c6102d3-cae6-4cfe-b951-68c5f36eef94-serving-cert\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764712 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-audit-dir\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764810 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642240f6-95c9-4447-a5d0-d6c700550863-config\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764856 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96315b77-d775-4329-8a85-0fd2705bf278-secret-volume\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764907 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-ca\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764972 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc7nq\" (UniqueName: \"kubernetes.io/projected/fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610-kube-api-access-nc7nq\") pod \"downloads-7954f5f757-86pjh\" (UID: \"fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610\") " pod="openshift-console/downloads-7954f5f757-86pjh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.764999 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8326b678-6225-4d2b-bec0-7486245510cb-serving-cert\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.765022 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjn9r\" (UniqueName: \"kubernetes.io/projected/74ba3508-b86b-4ab5-8a85-91dddde0df79-kube-api-access-vjn9r\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.765109 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-service-ca\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: E0124 03:44:03.765502 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.265472653 +0000 UTC m=+141.302563408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.765790 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c54dc1be-1a2d-433d-bb84-a274bdd4365b-oauth-serving-cert\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.765867 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642240f6-95c9-4447-a5d0-d6c700550863-config\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.765884 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.765954 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-etcd-client\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766106 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-audit\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766393 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-client-ca\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 
03:44:03.766432 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-metrics-certs\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766457 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9ng\" (UniqueName: \"kubernetes.io/projected/e9bc8517-d223-42cf-9c6d-5e96dbc58e27-kube-api-access-4d9ng\") pod \"control-plane-machine-set-operator-78cbb6b69f-f88sl\" (UID: \"e9bc8517-d223-42cf-9c6d-5e96dbc58e27\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766482 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc81cbc1-3a36-4bbc-8b30-198766877216-metrics-tls\") pod \"dns-operator-744455d44c-wwhc2\" (UID: \"fc81cbc1-3a36-4bbc-8b30-198766877216\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766527 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766548 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7498c90-00fc-4024-8509-c135ff6ce906-service-ca-bundle\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766583 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmr5d\" (UniqueName: \"kubernetes.io/projected/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-kube-api-access-mmr5d\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766691 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8326b678-6225-4d2b-bec0-7486245510cb-trusted-ca\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766714 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/69494268-9917-4f39-a8d8-0a73e898ea6e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766749 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-stats-auth\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766691 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6102d3-cae6-4cfe-b951-68c5f36eef94-service-ca-bundle\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-csi-data-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766959 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-etcd-client\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.766999 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-metrics-tls\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.767022 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ckl\" (UniqueName: \"kubernetes.io/projected/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-kube-api-access-x8ckl\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.767044 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/249748cf-cf33-40d6-bbe4-74af2c05395c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.768043 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58f2c7e6-4613-44d2-a060-2abdf10e01a2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.768097 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-config\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.768375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c289b5b-107c-4fce-8774-93bdf5001627-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hd4qv\" (UID: \"9c289b5b-107c-4fce-8774-93bdf5001627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.769302 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26bbc842-3687-4b32-8530-6ae150b7126b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-99789\" (UID: \"26bbc842-3687-4b32-8530-6ae150b7126b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.769344 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-etcd-client\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.770550 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-encryption-config\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.770582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f2c7e6-4613-44d2-a060-2abdf10e01a2-serving-cert\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.770881 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8326b678-6225-4d2b-bec0-7486245510cb-trusted-ca\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.771068 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-serving-cert\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.771160 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b58435a-62c0-4129-a8d5-434a75e0f600-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.771681 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-serving-cert\") 
pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.772198 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b58435a-62c0-4129-a8d5-434a75e0f600-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.772496 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc81cbc1-3a36-4bbc-8b30-198766877216-metrics-tls\") pod \"dns-operator-744455d44c-wwhc2\" (UID: \"fc81cbc1-3a36-4bbc-8b30-198766877216\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.773272 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c54dc1be-1a2d-433d-bb84-a274bdd4365b-console-oauth-config\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.774501 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-metrics-tls\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.774570 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf79dca-3857-4554-9b1a-8b98d98c88ad-serving-cert\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.774879 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ljx\" (UniqueName: \"kubernetes.io/projected/f6d4a349-9797-4909-b0fe-eca12c6f7435-kube-api-access-88ljx\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.774945 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9d3510-946e-49b9-bd37-aa49de76ee43-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.774992 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-client\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc 
kubenswrapper[4772]: I0124 03:44:03.775035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ba3508-b86b-4ab5-8a85-91dddde0df79-config\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775091 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv798\" (UniqueName: \"kubernetes.io/projected/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-kube-api-access-dv798\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775236 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5v4z\" (UniqueName: \"kubernetes.io/projected/f22ad02a-18c0-4c78-ba2e-424e8431ecb0-kube-api-access-x5v4z\") pod \"migrator-59844c95c7-9h8qp\" (UID: \"f22ad02a-18c0-4c78-ba2e-424e8431ecb0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775347 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9bc8517-d223-42cf-9c6d-5e96dbc58e27-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f88sl\" (UID: \"e9bc8517-d223-42cf-9c6d-5e96dbc58e27\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775371 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775395 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr79x\" (UniqueName: \"kubernetes.io/projected/0f9d3510-946e-49b9-bd37-aa49de76ee43-kube-api-access-hr79x\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775418 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/55c2640f-1744-4c8e-a11c-85efa6215c47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775445 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/642240f6-95c9-4447-a5d0-d6c700550863-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775445 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-proxy-tls\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775470 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwm2c\" (UniqueName: \"kubernetes.io/projected/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-kube-api-access-zwm2c\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775544 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-trusted-ca\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775591 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775671 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rddvs\" (UniqueName: \"kubernetes.io/projected/2ed4e912-e375-41c4-a319-a360e33e8fde-kube-api-access-rddvs\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775711 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hlk7\" (UniqueName: \"kubernetes.io/projected/735b86db-4d0b-4f1e-bfef-74f16fffa13d-kube-api-access-9hlk7\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775707 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ba3508-b86b-4ab5-8a85-91dddde0df79-config\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.775877 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddhk\" (UniqueName: \"kubernetes.io/projected/3c951b6b-540d-4a3d-aeae-33ad49519b13-kube-api-access-kddhk\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.780407 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.781374 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/642240f6-95c9-4447-a5d0-d6c700550863-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.782026 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-trusted-ca\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.791329 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.799857 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.820047 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.838041 4772 request.go:700] Waited for 1.000654841s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0 Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.840600 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.860002 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878116 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:03 crc kubenswrapper[4772]: E0124 03:44:03.878385 4772 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.37834847 +0000 UTC m=+141.415439245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878558 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62148cbe-9135-4627-8b05-05f8f4465d20-config\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878630 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/735b86db-4d0b-4f1e-bfef-74f16fffa13d-proxy-tls\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878661 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878703 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/249748cf-cf33-40d6-bbe4-74af2c05395c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878775 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-service-ca\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878826 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4074e9d5-7b79-41fe-9b4c-932a4bd47883-profile-collector-cert\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878873 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62148cbe-9135-4627-8b05-05f8f4465d20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878918 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f342eb-5009-4dcc-a013-426ea46c1959-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.878980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxk6t\" (UniqueName: \"kubernetes.io/projected/62148cbe-9135-4627-8b05-05f8f4465d20-kube-api-access-cxk6t\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879029 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-config\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879101 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-dir\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879153 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69494268-9917-4f39-a8d8-0a73e898ea6e-serving-cert\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879205 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879252 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-encryption-config\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879296 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62148cbe-9135-4627-8b05-05f8f4465d20-images\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879341 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879386 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-audit-policies\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879448 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96315b77-d775-4329-8a85-0fd2705bf278-config-volume\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879501 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s57n7\" (UniqueName: \"kubernetes.io/projected/e6f342eb-5009-4dcc-a013-426ea46c1959-kube-api-access-s57n7\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879549 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-node-bootstrap-token\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879600 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcbzt\" (UniqueName: \"kubernetes.io/projected/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-kube-api-access-rcbzt\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:03 crc 
kubenswrapper[4772]: I0124 03:44:03.879652 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcswz\" (UniqueName: \"kubernetes.io/projected/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-kube-api-access-mcswz\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879716 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c951b6b-540d-4a3d-aeae-33ad49519b13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879811 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.879848 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.880090 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.880518 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.880926 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6d4a349-9797-4909-b0fe-eca12c6f7435-signing-cabundle\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.880976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62148cbe-9135-4627-8b05-05f8f4465d20-images\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.881005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c951b6b-540d-4a3d-aeae-33ad49519b13-webhook-cert\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc 
kubenswrapper[4772]: I0124 03:44:03.881059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw6p2\" (UniqueName: \"kubernetes.io/projected/4074e9d5-7b79-41fe-9b4c-932a4bd47883-kube-api-access-gw6p2\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.881146 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-mountpoint-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.881206 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.881260 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-serving-cert\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.881299 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62148cbe-9135-4627-8b05-05f8f4465d20-config\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.881915 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-dir\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882365 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-mountpoint-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882489 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-serving-cert\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882551 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-plugins-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: 
\"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6d4a349-9797-4909-b0fe-eca12c6f7435-signing-key\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882701 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882839 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/249748cf-cf33-40d6-bbe4-74af2c05395c-config\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882923 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.882976 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3c951b6b-540d-4a3d-aeae-33ad49519b13-tmpfs\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883029 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rzp\" (UniqueName: \"kubernetes.io/projected/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-kube-api-access-v8rzp\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883066 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96315b77-d775-4329-8a85-0fd2705bf278-config-volume\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883078 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-registration-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883198 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-config\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883228 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8x7p\" (UniqueName: \"kubernetes.io/projected/427804d2-581e-4a93-ac0b-3e98b25182fc-kube-api-access-k8x7p\") pod \"ingress-canary-lfvrg\" (UID: \"427804d2-581e-4a93-ac0b-3e98b25182fc\") " pod="openshift-ingress-canary/ingress-canary-lfvrg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883299 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-policies\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883366 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883401 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f98b89d5-0baf-4892-8d9b-44e64a3d793b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c5k7g\" (UID: \"f98b89d5-0baf-4892-8d9b-44e64a3d793b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883428 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9d3510-946e-49b9-bd37-aa49de76ee43-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883469 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-plugins-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883488 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgm9\" (UniqueName: \"kubernetes.io/projected/e18f918d-3751-4397-8029-4b1a3bc87953-kube-api-access-dzgm9\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883564 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/427804d2-581e-4a93-ac0b-3e98b25182fc-cert\") pod \"ingress-canary-lfvrg\" (UID: \"427804d2-581e-4a93-ac0b-3e98b25182fc\") 
" pod="openshift-ingress-canary/ingress-canary-lfvrg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-config\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883671 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-config-volume\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883709 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vkd\" (UniqueName: \"kubernetes.io/projected/d7498c90-00fc-4024-8509-c135ff6ce906-kube-api-access-t2vkd\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883770 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/55c2640f-1744-4c8e-a11c-85efa6215c47-srv-cert\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883835 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883878 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfvhs\" (UniqueName: \"kubernetes.io/projected/96315b77-d775-4329-8a85-0fd2705bf278-kube-api-access-wfvhs\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883911 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2wn7\" (UniqueName: \"kubernetes.io/projected/55c2640f-1744-4c8e-a11c-85efa6215c47-kube-api-access-z2wn7\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883946 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f342eb-5009-4dcc-a013-426ea46c1959-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.883976 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-metrics-tls\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884058 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884167 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-registration-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884296 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/735b86db-4d0b-4f1e-bfef-74f16fffa13d-proxy-tls\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884356 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884466 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-config\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884467 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884553 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884581 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck89j\" (UniqueName: \"kubernetes.io/projected/f98b89d5-0baf-4892-8d9b-44e64a3d793b-kube-api-access-ck89j\") pod \"cluster-samples-operator-665b6dd947-c5k7g\" (UID: \"f98b89d5-0baf-4892-8d9b-44e64a3d793b\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884625 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884647 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-serving-cert\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884681 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884703 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-default-certificate\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/735b86db-4d0b-4f1e-bfef-74f16fffa13d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884782 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-socket-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884859 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnm2r\" (UniqueName: \"kubernetes.io/projected/69494268-9917-4f39-a8d8-0a73e898ea6e-kube-api-access-fnm2r\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884894 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7s4r\" (UniqueName: \"kubernetes.io/projected/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-kube-api-access-t7s4r\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:03 crc 
kubenswrapper[4772]: I0124 03:44:03.884923 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwmgx\" (UniqueName: \"kubernetes.io/projected/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-kube-api-access-gwmgx\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884954 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-certs\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.884984 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4074e9d5-7b79-41fe-9b4c-932a4bd47883-srv-cert\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885022 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/735b86db-4d0b-4f1e-bfef-74f16fffa13d-images\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-audit-dir\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885091 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-serving-cert\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885120 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96315b77-d775-4329-8a85-0fd2705bf278-secret-volume\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885151 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-ca\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885212 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885242 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-client-ca\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885265 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-metrics-certs\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885289 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-etcd-client\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885320 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7498c90-00fc-4024-8509-c135ff6ce906-service-ca-bundle\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885346 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9ng\" (UniqueName: \"kubernetes.io/projected/e9bc8517-d223-42cf-9c6d-5e96dbc58e27-kube-api-access-4d9ng\") pod \"control-plane-machine-set-operator-78cbb6b69f-f88sl\" (UID: \"e9bc8517-d223-42cf-9c6d-5e96dbc58e27\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885370 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-csi-data-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/69494268-9917-4f39-a8d8-0a73e898ea6e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885436 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-stats-auth\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" 
Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885463 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ckl\" (UniqueName: \"kubernetes.io/projected/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-kube-api-access-x8ckl\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885490 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/249748cf-cf33-40d6-bbe4-74af2c05395c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885515 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88ljx\" (UniqueName: \"kubernetes.io/projected/f6d4a349-9797-4909-b0fe-eca12c6f7435-kube-api-access-88ljx\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885545 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv798\" (UniqueName: \"kubernetes.io/projected/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-kube-api-access-dv798\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885574 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9d3510-946e-49b9-bd37-aa49de76ee43-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885599 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-client\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885627 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885634 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5v4z\" (UniqueName: \"kubernetes.io/projected/f22ad02a-18c0-4c78-ba2e-424e8431ecb0-kube-api-access-x5v4z\") pod \"migrator-59844c95c7-9h8qp\" (UID: \"f22ad02a-18c0-4c78-ba2e-424e8431ecb0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885781 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9bc8517-d223-42cf-9c6d-5e96dbc58e27-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f88sl\" (UID: \"e9bc8517-d223-42cf-9c6d-5e96dbc58e27\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885818 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwm2c\" (UniqueName: \"kubernetes.io/projected/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-kube-api-access-zwm2c\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885872 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr79x\" (UniqueName: \"kubernetes.io/projected/0f9d3510-946e-49b9-bd37-aa49de76ee43-kube-api-access-hr79x\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885894 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/55c2640f-1744-4c8e-a11c-85efa6215c47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885924 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885958 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rddvs\" (UniqueName: \"kubernetes.io/projected/2ed4e912-e375-41c4-a319-a360e33e8fde-kube-api-access-rddvs\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.885982 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hlk7\" (UniqueName: \"kubernetes.io/projected/735b86db-4d0b-4f1e-bfef-74f16fffa13d-kube-api-access-9hlk7\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.886011 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kddhk\" (UniqueName: \"kubernetes.io/projected/3c951b6b-540d-4a3d-aeae-33ad49519b13-kube-api-access-kddhk\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.886119 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-policies\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: E0124 03:44:03.886350 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.386321733 +0000 UTC m=+141.423412669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.886412 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f342eb-5009-4dcc-a013-426ea46c1959-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.886674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.886936 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/249748cf-cf33-40d6-bbe4-74af2c05395c-config\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.887045 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3c951b6b-540d-4a3d-aeae-33ad49519b13-tmpfs\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.887460 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-csi-data-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.887585 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9d3510-946e-49b9-bd37-aa49de76ee43-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.888147 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-config\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.888288 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-socket-dir\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.889318 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/69494268-9917-4f39-a8d8-0a73e898ea6e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.890546 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-audit-dir\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.891702 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/735b86db-4d0b-4f1e-bfef-74f16fffa13d-images\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.891992 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.892184 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-encryption-config\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.893456 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f98b89d5-0baf-4892-8d9b-44e64a3d793b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c5k7g\" (UID: \"f98b89d5-0baf-4892-8d9b-44e64a3d793b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.893474 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f342eb-5009-4dcc-a013-426ea46c1959-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.893641 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.894668 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.894674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-client-ca\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.895021 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-serving-cert\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.895420 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.895610 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.895917 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.895965 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.896148 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69494268-9917-4f39-a8d8-0a73e898ea6e-serving-cert\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.896332 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.896359 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.896406 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/735b86db-4d0b-4f1e-bfef-74f16fffa13d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.896930 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.897801 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/55c2640f-1744-4c8e-a11c-85efa6215c47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.898240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4074e9d5-7b79-41fe-9b4c-932a4bd47883-srv-cert\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.898507 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.899250 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.899323 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.901010 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96315b77-d775-4329-8a85-0fd2705bf278-secret-volume\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.901025 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62148cbe-9135-4627-8b05-05f8f4465d20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.902315 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.902485 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9d3510-946e-49b9-bd37-aa49de76ee43-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.902824 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/249748cf-cf33-40d6-bbe4-74af2c05395c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.907460 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-etcd-client\") pod 
\"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.913273 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-serving-cert\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.914007 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4074e9d5-7b79-41fe-9b4c-932a4bd47883-profile-collector-cert\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.918794 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.930082 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-serving-cert\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.940489 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.946356 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.959666 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.978782 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.987173 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:03 crc kubenswrapper[4772]: E0124 03:44:03.987304 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.487275225 +0000 UTC m=+141.524365960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.987618 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:03 crc kubenswrapper[4772]: E0124 03:44:03.988278 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.488248963 +0000 UTC m=+141.525339698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:03 crc kubenswrapper[4772]: I0124 03:44:03.999355 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.002319 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-audit-policies\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.019330 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.039147 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.059193 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.063465 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-default-certificate\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.079474 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.088718 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.089220 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.589191435 +0000 UTC m=+141.626282170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.089357 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.090070 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.590032708 +0000 UTC m=+141.627123463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.094167 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-stats-auth\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.099754 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.108483 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7498c90-00fc-4024-8509-c135ff6ce906-metrics-certs\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.119775 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.139672 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.145237 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7498c90-00fc-4024-8509-c135ff6ce906-service-ca-bundle\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.159170 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.179562 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.191260 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.191459 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.691437603 +0000 UTC m=+141.728528328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.191663 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.191993 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.691985128 +0000 UTC m=+141.729075853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.199664 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.212959 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-serving-cert\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.219172 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.225232 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-client\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.239462 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.242660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-config\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.259432 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.264775 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-ca\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.279888 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.281964 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-etcd-service-ca\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.293599 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.293833 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.793801535 +0000 UTC m=+141.830892290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.294831 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.295335 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.795319357 +0000 UTC m=+141.832410112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.301413 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.319859 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.339316 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.350037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9bc8517-d223-42cf-9c6d-5e96dbc58e27-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f88sl\" (UID: \"e9bc8517-d223-42cf-9c6d-5e96dbc58e27\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.360894 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.379707 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.387663 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c951b6b-540d-4a3d-aeae-33ad49519b13-webhook-cert\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.387801 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c951b6b-540d-4a3d-aeae-33ad49519b13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.396030 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.396259 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.896230198 +0000 UTC m=+141.933320953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.396646 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.397471 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.897435082 +0000 UTC m=+141.934525837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.400549 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.419219 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.432635 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6d4a349-9797-4909-b0fe-eca12c6f7435-signing-key\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.439966 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.442867 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6d4a349-9797-4909-b0fe-eca12c6f7435-signing-cabundle\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.460536 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.478965 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.489950 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/55c2640f-1744-4c8e-a11c-85efa6215c47-srv-cert\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.497777 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.497978 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.997943111 +0000 UTC m=+142.035033866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.499149 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.499200 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.499672 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:04.999637559 +0000 UTC m=+142.036728314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.504206 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-certs\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.519339 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.539367 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.548612 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-node-bootstrap-token\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.559032 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.579727 4772 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.599261 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.601044 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.601311 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.10127339 +0000 UTC m=+142.138364155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.601851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.602454 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.102428152 +0000 UTC m=+142.139518917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.619439 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.626350 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-config-volume\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.639550 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.659602 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.671524 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-metrics-tls\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.679453 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.699613 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.703417 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.703664 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.203607641 +0000 UTC m=+142.240698396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.704017 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.704523 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.204486325 +0000 UTC m=+142.241577090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.719560 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.729935 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/427804d2-581e-4a93-ac0b-3e98b25182fc-cert\") pod \"ingress-canary-lfvrg\" (UID: \"427804d2-581e-4a93-ac0b-3e98b25182fc\") " pod="openshift-ingress-canary/ingress-canary-lfvrg" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.739222 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.805824 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.806062 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.306008063 +0000 UTC m=+142.343098818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.807014 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.807893 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.307876836 +0000 UTC m=+142.344967591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.809674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwwkv\" (UniqueName: \"kubernetes.io/projected/9c289b5b-107c-4fce-8774-93bdf5001627-kube-api-access-qwwkv\") pod \"package-server-manager-789f6589d5-hd4qv\" (UID: \"9c289b5b-107c-4fce-8774-93bdf5001627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.826829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642240f6-95c9-4447-a5d0-d6c700550863-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6nbg6\" (UID: \"642240f6-95c9-4447-a5d0-d6c700550863\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.846026 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8lxr\" (UniqueName: \"kubernetes.io/projected/3b407e4f-8708-456d-8e26-334da5ec43e7-kube-api-access-d8lxr\") pod \"openshift-controller-manager-operator-756b6f6bc6-p2fqb\" (UID: \"3b407e4f-8708-456d-8e26-334da5ec43e7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.857085 4772 request.go:700] Waited for 1.098124106s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.866268 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2v8c\" (UniqueName: \"kubernetes.io/projected/8326b678-6225-4d2b-bec0-7486245510cb-kube-api-access-t2v8c\") pod \"console-operator-58897d9998-d8xg2\" (UID: \"8326b678-6225-4d2b-bec0-7486245510cb\") " pod="openshift-console-operator/console-operator-58897d9998-d8xg2"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.869319 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d8xg2"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.887564 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6npb\" (UniqueName: \"kubernetes.io/projected/fc81cbc1-3a36-4bbc-8b30-198766877216-kube-api-access-l6npb\") pod \"dns-operator-744455d44c-wwhc2\" (UID: \"fc81cbc1-3a36-4bbc-8b30-198766877216\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.911892 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.912531 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.412495311 +0000 UTC m=+142.449586076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.912769 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7rf\" (UniqueName: \"kubernetes.io/projected/c54dc1be-1a2d-433d-bb84-a274bdd4365b-kube-api-access-sc7rf\") pod \"console-f9d7485db-sflxf\" (UID: \"c54dc1be-1a2d-433d-bb84-a274bdd4365b\") " pod="openshift-console/console-f9d7485db-sflxf"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.913281 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:04 crc kubenswrapper[4772]: E0124 03:44:04.913972 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.413953021 +0000 UTC m=+142.451043776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.928607 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/6ada6c66-3eb9-4d60-9f0c-654ee96e06cb-kube-api-access-hsvmz\") pod \"machine-config-controller-84d6567774-wpvk5\" (UID: \"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.947407 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.954999 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sflxf"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.967796 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.974527 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9998l\" (UniqueName: \"kubernetes.io/projected/fbf79dca-3857-4554-9b1a-8b98d98c88ad-kube-api-access-9998l\") pod \"route-controller-manager-6576b87f9c-z98q4\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"
Jan 24 03:44:04 crc kubenswrapper[4772]: I0124 03:44:04.979991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wnrr\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-kube-api-access-8wnrr\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.008390 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-bound-sa-token\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.015017 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.016438 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.516416286 +0000 UTC m=+142.553507011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.020794 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.029400 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.034773 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-794mh\" (UniqueName: \"kubernetes.io/projected/58f2c7e6-4613-44d2-a060-2abdf10e01a2-kube-api-access-794mh\") pod \"apiserver-76f77b778f-g9ckr\" (UID: \"58f2c7e6-4613-44d2-a060-2abdf10e01a2\") " pod="openshift-apiserver/apiserver-76f77b778f-g9ckr"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.040594 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.052000 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9kqk\" (UniqueName: \"kubernetes.io/projected/6c6102d3-cae6-4cfe-b951-68c5f36eef94-kube-api-access-p9kqk\") pod \"authentication-operator-69f744f599-l9vkq\" (UID: \"6c6102d3-cae6-4cfe-b951-68c5f36eef94\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.056590 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.071519 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc7nq\" (UniqueName: \"kubernetes.io/projected/fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610-kube-api-access-nc7nq\") pod \"downloads-7954f5f757-86pjh\" (UID: \"fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610\") " pod="openshift-console/downloads-7954f5f757-86pjh"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.087400 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjn9r\" (UniqueName: \"kubernetes.io/projected/74ba3508-b86b-4ab5-8a85-91dddde0df79-kube-api-access-vjn9r\") pod \"machine-approver-56656f9798-xgt94\" (UID: \"74ba3508-b86b-4ab5-8a85-91dddde0df79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.101064 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8bnl\" (UniqueName: \"kubernetes.io/projected/26bbc842-3687-4b32-8530-6ae150b7126b-kube-api-access-c8bnl\") pod \"multus-admission-controller-857f4d67dd-99789\" (UID: \"26bbc842-3687-4b32-8530-6ae150b7126b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-99789"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.117851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.118579 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.618560831 +0000 UTC m=+142.655651576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.122314 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ef7ecc6-52d7-4c08-984b-f4fab28494d2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ttxxp\" (UID: \"4ef7ecc6-52d7-4c08-984b-f4fab28494d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.138450 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmr5d\" (UniqueName: \"kubernetes.io/projected/0a7e7f62-aefc-4d8c-a87e-acdb182280fd-kube-api-access-mmr5d\") pod \"ingress-operator-5b745b69d9-crfdt\" (UID: \"0a7e7f62-aefc-4d8c-a87e-acdb182280fd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.141774 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.153228 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.155590 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/249748cf-cf33-40d6-bbe4-74af2c05395c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8jvkg\" (UID: \"249748cf-cf33-40d6-bbe4-74af2c05395c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.163180 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.165249 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d8xg2"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.173066 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s57n7\" (UniqueName: \"kubernetes.io/projected/e6f342eb-5009-4dcc-a013-426ea46c1959-kube-api-access-s57n7\") pod \"kube-storage-version-migrator-operator-b67b599dd-9czt7\" (UID: \"e6f342eb-5009-4dcc-a013-426ea46c1959\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.179448 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.201323 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.218897 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.219369 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.719353028 +0000 UTC m=+142.756443753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.222110 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-86pjh"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.223267 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcbzt\" (UniqueName: \"kubernetes.io/projected/11840fdc-3cd8-4efe-bf63-68cfc5ab4f21-kube-api-access-rcbzt\") pod \"dns-default-mq527\" (UID: \"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21\") " pod="openshift-dns/dns-default-mq527"
Jan 24 03:44:05 crc kubenswrapper[4772]: W0124 03:44:05.229811 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74ba3508_b86b_4ab5_8a85_91dddde0df79.slice/crio-a231c789cd3cd86f5dca46526590afb2cc1c191ac03c3212df06d7f19b11e625 WatchSource:0}: Error finding container a231c789cd3cd86f5dca46526590afb2cc1c191ac03c3212df06d7f19b11e625: Status 404 returned error can't find the container with id a231c789cd3cd86f5dca46526590afb2cc1c191ac03c3212df06d7f19b11e625
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.231008 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sflxf"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.233938 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.236446 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcswz\" (UniqueName: \"kubernetes.io/projected/9d623682-cd3a-44e9-b8de-1b4134bbb2b6-kube-api-access-mcswz\") pod \"etcd-operator-b45778765-g767c\" (UID: \"9d623682-cd3a-44e9-b8de-1b4134bbb2b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g767c"
Jan 24 03:44:05 crc kubenswrapper[4772]: W0124 03:44:05.247494 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc54dc1be_1a2d_433d_bb84_a274bdd4365b.slice/crio-fc00c6af61e483bde722bc3becd4194651ff8bc176ebc5e36ec95862f191fb73 WatchSource:0}: Error finding container fc00c6af61e483bde722bc3becd4194651ff8bc176ebc5e36ec95862f191fb73: Status 404 returned error can't find the container with id fc00c6af61e483bde722bc3becd4194651ff8bc176ebc5e36ec95862f191fb73
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.254411 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxk6t\" (UniqueName: \"kubernetes.io/projected/62148cbe-9135-4627-8b05-05f8f4465d20-kube-api-access-cxk6t\") pod \"machine-api-operator-5694c8668f-lpsdw\" (UID: \"62148cbe-9135-4627-8b05-05f8f4465d20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.255430 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mq527"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.272871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgm9\" (UniqueName: \"kubernetes.io/projected/e18f918d-3751-4397-8029-4b1a3bc87953-kube-api-access-dzgm9\") pod \"marketplace-operator-79b997595-8vwrd\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.302477 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfvhs\" (UniqueName: \"kubernetes.io/projected/96315b77-d775-4329-8a85-0fd2705bf278-kube-api-access-wfvhs\") pod \"collect-profiles-29487090-mlcjv\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.312705 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.323255 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.323648 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.823634683 +0000 UTC m=+142.860725408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.327929 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2wn7\" (UniqueName: \"kubernetes.io/projected/55c2640f-1744-4c8e-a11c-85efa6215c47-kube-api-access-z2wn7\") pod \"olm-operator-6b444d44fb-zgm46\" (UID: \"55c2640f-1744-4c8e-a11c-85efa6215c47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.335945 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8x7p\" (UniqueName: \"kubernetes.io/projected/427804d2-581e-4a93-ac0b-3e98b25182fc-kube-api-access-k8x7p\") pod \"ingress-canary-lfvrg\" (UID: \"427804d2-581e-4a93-ac0b-3e98b25182fc\") " pod="openshift-ingress-canary/ingress-canary-lfvrg"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.345414 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-99789"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.354575 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.363670 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2vkd\" (UniqueName: \"kubernetes.io/projected/d7498c90-00fc-4024-8509-c135ff6ce906-kube-api-access-t2vkd\") pod \"router-default-5444994796-ktgts\" (UID: \"d7498c90-00fc-4024-8509-c135ff6ce906\") " pod="openshift-ingress/router-default-5444994796-ktgts"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.379956 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5v4z\" (UniqueName: \"kubernetes.io/projected/f22ad02a-18c0-4c78-ba2e-424e8431ecb0-kube-api-access-x5v4z\") pod \"migrator-59844c95c7-9h8qp\" (UID: \"f22ad02a-18c0-4c78-ba2e-424e8431ecb0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.387546 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.387747 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.398379 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kddhk\" (UniqueName: \"kubernetes.io/projected/3c951b6b-540d-4a3d-aeae-33ad49519b13-kube-api-access-kddhk\") pod \"packageserver-d55dfcdfc-b2qsl\" (UID: \"3c951b6b-540d-4a3d-aeae-33ad49519b13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl"
Jan 24 03:44:05 crc kubenswrapper[4772]: W0124 03:44:05.412264 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod642240f6_95c9_4447_a5d0_d6c700550863.slice/crio-f5dc6735788f9e5d1c6a0f8e1e5517f911c27e9e63cf7c116b80f4e3f8d29cb8 WatchSource:0}: Error finding container f5dc6735788f9e5d1c6a0f8e1e5517f911c27e9e63cf7c116b80f4e3f8d29cb8: Status 404 returned error can't find the container with id f5dc6735788f9e5d1c6a0f8e1e5517f911c27e9e63cf7c116b80f4e3f8d29cb8
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.415395 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.419677 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwm2c\" (UniqueName: \"kubernetes.io/projected/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-kube-api-access-zwm2c\") pod \"controller-manager-879f6c89f-mshdh\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.422080 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.424140 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.424290 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.924260836 +0000 UTC m=+142.961351561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.424652 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.425055 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:05.925044998 +0000 UTC m=+142.962135723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.428633 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" event={"ID":"642240f6-95c9-4447-a5d0-d6c700550863","Type":"ContainerStarted","Data":"f5dc6735788f9e5d1c6a0f8e1e5517f911c27e9e63cf7c116b80f4e3f8d29cb8"}
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.434227 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.437413 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sflxf" event={"ID":"c54dc1be-1a2d-433d-bb84-a274bdd4365b","Type":"ContainerStarted","Data":"fc00c6af61e483bde722bc3becd4194651ff8bc176ebc5e36ec95862f191fb73"}
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.438627 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" event={"ID":"74ba3508-b86b-4ab5-8a85-91dddde0df79","Type":"ContainerStarted","Data":"a231c789cd3cd86f5dca46526590afb2cc1c191ac03c3212df06d7f19b11e625"}
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.441472 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d8xg2" event={"ID":"8326b678-6225-4d2b-bec0-7486245510cb","Type":"ContainerStarted","Data":"25f1d2d4e54714be59b8c92bfa2916c5bd87ff320121b033e2392f0b0704d6ea"}
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.441646 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rzp\" (UniqueName: \"kubernetes.io/projected/dfc35e52-c6b0-4011-89fa-a3b77f5ef706-kube-api-access-v8rzp\") pod \"apiserver-7bbb656c7d-78vrm\" (UID: \"dfc35e52-c6b0-4011-89fa-a3b77f5ef706\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.450563 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.458605 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw6p2\" (UniqueName: \"kubernetes.io/projected/4074e9d5-7b79-41fe-9b4c-932a4bd47883-kube-api-access-gw6p2\") pod \"catalog-operator-68c6474976-cm82q\" (UID: \"4074e9d5-7b79-41fe-9b4c-932a4bd47883\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.463202 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.471711 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.478446 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ktgts"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.483053 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rddvs\" (UniqueName: \"kubernetes.io/projected/2ed4e912-e375-41c4-a319-a360e33e8fde-kube-api-access-rddvs\") pod \"oauth-openshift-558db77b4-mk8n7\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7"
Jan 24 03:44:05 crc kubenswrapper[4772]: W0124 03:44:05.488326 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b407e4f_8708_456d_8e26_334da5ec43e7.slice/crio-24414a273b6ff9c5c2268329aa21ee2a4d1e88b37fdb7627075dc66186cd7593 WatchSource:0}: Error finding container 24414a273b6ff9c5c2268329aa21ee2a4d1e88b37fdb7627075dc66186cd7593: Status 404 returned error can't find the container with id 24414a273b6ff9c5c2268329aa21ee2a4d1e88b37fdb7627075dc66186cd7593
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.488559 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-g767c"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.495843 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hlk7\" (UniqueName: \"kubernetes.io/projected/735b86db-4d0b-4f1e-bfef-74f16fffa13d-kube-api-access-9hlk7\") pod \"machine-config-operator-74547568cd-rvdjm\" (UID: \"735b86db-4d0b-4f1e-bfef-74f16fffa13d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.511415 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.515807 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.515835 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck89j\" (UniqueName: \"kubernetes.io/projected/f98b89d5-0baf-4892-8d9b-44e64a3d793b-kube-api-access-ck89j\") pod \"cluster-samples-operator-665b6dd947-c5k7g\" (UID: \"f98b89d5-0baf-4892-8d9b-44e64a3d793b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.526342 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.526793 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.026776962 +0000 UTC m=+143.063867687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.537766 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.542767 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88ljx\" (UniqueName: \"kubernetes.io/projected/f6d4a349-9797-4909-b0fe-eca12c6f7435-kube-api-access-88ljx\") pod \"service-ca-9c57cc56f-9j994\" (UID: \"f6d4a349-9797-4909-b0fe-eca12c6f7435\") " pod="openshift-service-ca/service-ca-9c57cc56f-9j994"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.554430 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv798\" (UniqueName: \"kubernetes.io/projected/d1fa00f5-8e93-4be6-aaff-228ccd9584f4-kube-api-access-dv798\") pod \"service-ca-operator-777779d784-7n9pf\" (UID: \"d1fa00f5-8e93-4be6-aaff-228ccd9584f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.567313 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lfvrg"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.569366 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l9vkq"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.583488 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ckl\" (UniqueName: \"kubernetes.io/projected/bcaaefc4-1474-44e9-b64f-2e495e7f8cc5-kube-api-access-x8ckl\") pod \"csi-hostpathplugin-szx8j\" (UID: \"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5\") " pod="hostpath-provisioner/csi-hostpathplugin-szx8j"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.610724 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnm2r\" (UniqueName: \"kubernetes.io/projected/69494268-9917-4f39-a8d8-0a73e898ea6e-kube-api-access-fnm2r\") pod \"openshift-config-operator-7777fb866f-5jj66\" (UID: \"69494268-9917-4f39-a8d8-0a73e898ea6e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.616649 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7s4r\" (UniqueName: \"kubernetes.io/projected/22621d4a-b5a2-4e46-97b2-245d5a5e4c0f-kube-api-access-t7s4r\") pod \"machine-config-server-cn7fb\" (UID: \"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f\") " pod="openshift-machine-config-operator/machine-config-server-cn7fb"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.628486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.628823 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.128811224 +0000 UTC m=+143.165901949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.637728 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.638446 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wwhc2"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.644591 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwmgx\" (UniqueName: \"kubernetes.io/projected/26a10b7a-8796-4116-8e3b-c8a3f4e06edb-kube-api-access-gwmgx\") pod \"cluster-image-registry-operator-dc59b4c8b-429g4\" (UID: \"26a10b7a-8796-4116-8e3b-c8a3f4e06edb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.657277 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr79x\" (UniqueName: \"kubernetes.io/projected/0f9d3510-946e-49b9-bd37-aa49de76ee43-kube-api-access-hr79x\") pod \"openshift-apiserver-operator-796bbdcf4f-fcfgv\" (UID: \"0f9d3510-946e-49b9-bd37-aa49de76ee43\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.665448 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.672889 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.677112 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9ng\" (UniqueName: \"kubernetes.io/projected/e9bc8517-d223-42cf-9c6d-5e96dbc58e27-kube-api-access-4d9ng\") pod \"control-plane-machine-set-operator-78cbb6b69f-f88sl\" (UID: \"e9bc8517-d223-42cf-9c6d-5e96dbc58e27\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.679395 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.693566 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.700707 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.714257 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66"
Jan 24 03:44:05 crc kubenswrapper[4772]: W0124 03:44:05.715684 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc81cbc1_3a36_4bbc_8b30_198766877216.slice/crio-c3686e1514d9d2f11579fbc2c9d056394b9e6e171fc72c0ca5f0e601932d8fb2 WatchSource:0}: Error finding container c3686e1514d9d2f11579fbc2c9d056394b9e6e171fc72c0ca5f0e601932d8fb2: Status 404 returned error can't find the container with id c3686e1514d9d2f11579fbc2c9d056394b9e6e171fc72c0ca5f0e601932d8fb2
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.735019 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.736118 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7"
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.736222 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.236180356 +0000 UTC m=+143.273271081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.740927 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.755260 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.793896 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.800527 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9j994"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.811235 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g9ckr"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.812246 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"]
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.824230 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cn7fb"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.838318 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.838727 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.338713842 +0000 UTC m=+143.375804567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.848272 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-szx8j"
Jan 24 03:44:05 crc kubenswrapper[4772]: I0124 03:44:05.939429 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:05 crc kubenswrapper[4772]: E0124 03:44:05.939720 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.439704325 +0000 UTC m=+143.476795050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:05 crc kubenswrapper[4772]: W0124 03:44:05.943880 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbf79dca_3857_4554_9b1a_8b98d98c88ad.slice/crio-d73994ac0c90f015198de54cf143b8582c6b24bfbe1aea156d7034974f9bf11b WatchSource:0}: Error finding container d73994ac0c90f015198de54cf143b8582c6b24bfbe1aea156d7034974f9bf11b: Status 404 returned error can't find the container with id d73994ac0c90f015198de54cf143b8582c6b24bfbe1aea156d7034974f9bf11b
Jan 24 03:44:05 crc kubenswrapper[4772]: W0124 03:44:05.953382 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f2c7e6_4613_44d2_a060_2abdf10e01a2.slice/crio-71d4aa9b33aebb492b6a63ed57ea356adaa8de785c24eea2045a2b060b7d5a88 WatchSource:0}: Error finding container 71d4aa9b33aebb492b6a63ed57ea356adaa8de785c24eea2045a2b060b7d5a88: Status 404 returned error can't find the container with id 71d4aa9b33aebb492b6a63ed57ea356adaa8de785c24eea2045a2b060b7d5a88
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.040360 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.042472 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.542444808 +0000 UTC m=+143.579535533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.142610 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.143695 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.643671488 +0000 UTC m=+143.680762213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:06 crc kubenswrapper[4772]: W0124 03:44:06.164851 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22621d4a_b5a2_4e46_97b2_245d5a5e4c0f.slice/crio-5a5b12cf48a117b5b3472e47eb9ecc3f900e79ea223d6ad900ecb542399e16f0 WatchSource:0}: Error finding container 5a5b12cf48a117b5b3472e47eb9ecc3f900e79ea223d6ad900ecb542399e16f0: Status 404 returned error can't find the container with id 5a5b12cf48a117b5b3472e47eb9ecc3f900e79ea223d6ad900ecb542399e16f0
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.207507 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lpsdw"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.212990 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mq527"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.219905 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-86pjh"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.229614 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.244611 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.245123 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.745103383 +0000 UTC m=+143.782194108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.250793 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.346450 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.346796 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.846779776 +0000 UTC m=+143.883870501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.453060 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.453884 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:06.95386808 +0000 UTC m=+143.990958805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.504045 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sflxf" event={"ID":"c54dc1be-1a2d-433d-bb84-a274bdd4365b","Type":"ContainerStarted","Data":"9db02f942caee8ebcbee83f9953896bac12e13637f6a089fe14db6f3826da50f"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.531208 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mq527" event={"ID":"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21","Type":"ContainerStarted","Data":"89e3788cfc1c8e73e0494b4135c4d535090bc51c94cb213094fcf8dfb4746372"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.546928 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" event={"ID":"9c289b5b-107c-4fce-8774-93bdf5001627","Type":"ContainerStarted","Data":"d25af23b821332fbcba9e0fb7f844db8404e44a3f31c5423df3461997aad90f2"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.546994 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" event={"ID":"9c289b5b-107c-4fce-8774-93bdf5001627","Type":"ContainerStarted","Data":"80664c55eeec577bbc05d609dbe0c647db8c03ee82ab17d3fba92cfccae1560e"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.555189 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.556789 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.056712765 +0000 UTC m=+144.093803490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.562985 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" event={"ID":"3b407e4f-8708-456d-8e26-334da5ec43e7","Type":"ContainerStarted","Data":"20503c08559cf6722a0f1f9c315fc20856f4347b5a3189012c768ad608b22982"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.563043 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" event={"ID":"3b407e4f-8708-456d-8e26-334da5ec43e7","Type":"ContainerStarted","Data":"24414a273b6ff9c5c2268329aa21ee2a4d1e88b37fdb7627075dc66186cd7593"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.568134 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" event={"ID":"6c6102d3-cae6-4cfe-b951-68c5f36eef94","Type":"ContainerStarted","Data":"1e9250d567034f39cb6e5784c8672841d07995909f886078093f655e90ad0e32"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.568180 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" event={"ID":"6c6102d3-cae6-4cfe-b951-68c5f36eef94","Type":"ContainerStarted","Data":"c19dcf9fc65c69393f51e73cda03ddd26b217ea516aa342d178ccb47b164cec3"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.570468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" event={"ID":"fbf79dca-3857-4554-9b1a-8b98d98c88ad","Type":"ContainerStarted","Data":"d73994ac0c90f015198de54cf143b8582c6b24bfbe1aea156d7034974f9bf11b"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.573034 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" event={"ID":"58f2c7e6-4613-44d2-a060-2abdf10e01a2","Type":"ContainerStarted","Data":"71d4aa9b33aebb492b6a63ed57ea356adaa8de785c24eea2045a2b060b7d5a88"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.576569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" event={"ID":"74ba3508-b86b-4ab5-8a85-91dddde0df79","Type":"ContainerStarted","Data":"ad71bb8eb0ffd99e41c7d7c17aad77eea10eb90f2a2f436db2540d2cd6643d7f"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.608340 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" event={"ID":"62148cbe-9135-4627-8b05-05f8f4465d20","Type":"ContainerStarted","Data":"4b5d4d6ea020eccb6a420950926b75fd695b01daebef236bcd60887e75f6fa5a"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.618154 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" event={"ID":"fc81cbc1-3a36-4bbc-8b30-198766877216","Type":"ContainerStarted","Data":"c3686e1514d9d2f11579fbc2c9d056394b9e6e171fc72c0ca5f0e601932d8fb2"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.626666 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d8xg2" event={"ID":"8326b678-6225-4d2b-bec0-7486245510cb","Type":"ContainerStarted","Data":"e5dab187a5decd0e766b363c21571225137c3d60f1ee882ed498a35c93b9268f"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.629264 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-d8xg2"
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.631701 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" event={"ID":"4ef7ecc6-52d7-4c08-984b-f4fab28494d2","Type":"ContainerStarted","Data":"448f970c451ea5ee331ce24922b134b611091c3ac36a91895772f12716da7f01"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.639698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cn7fb" event={"ID":"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f","Type":"ContainerStarted","Data":"5a5b12cf48a117b5b3472e47eb9ecc3f900e79ea223d6ad900ecb542399e16f0"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.641026 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.646105 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-86pjh" event={"ID":"fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610","Type":"ContainerStarted","Data":"c60ea983a468885d4e30cb6b09c9aeecc4cf9ac4c78169137483566570bce65b"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.661035 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" event={"ID":"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb","Type":"ContainerStarted","Data":"9e66c5dbc3e294c9dd0252314a5baaa668a47dc21c116075941ad462ab27e81a"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.667093 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.668939 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.168918442 +0000 UTC m=+144.206009267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.681006 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.694627 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.694669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ktgts" event={"ID":"d7498c90-00fc-4024-8509-c135ff6ce906","Type":"ContainerStarted","Data":"4bf3cc54e3f568218671fcb96d403c963dc6cf8ff5292d6b542421b0a0c9a5f4"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.694689 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ktgts" event={"ID":"d7498c90-00fc-4024-8509-c135ff6ce906","Type":"ContainerStarted","Data":"53d6f6ec4d05b2866028ce5eb8bfe3ce639cacae7f801b14ee17902f48693d70"}
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.698516 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8vwrd"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.716438 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.720824 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-99789"]
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.729626 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7"]
Jan 24 03:44:06 crc kubenswrapper[4772]: W0124 03:44:06.743047 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode18f918d_3751_4397_8029_4b1a3bc87953.slice/crio-008d7d50967c24e0dc8dc1d625108fcba355ceed6bb18decd3dc600258f49af6 WatchSource:0}: Error finding container 008d7d50967c24e0dc8dc1d625108fcba355ceed6bb18decd3dc600258f49af6: Status 404 returned error can't find the container with id 008d7d50967c24e0dc8dc1d625108fcba355ceed6bb18decd3dc600258f49af6
Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.778150 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.779152 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-24 03:44:07.279130624 +0000 UTC m=+144.316221349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.786939 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl"] Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.811243 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g767c"] Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.861260 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" event={"ID":"642240f6-95c9-4447-a5d0-d6c700550863","Type":"ContainerStarted","Data":"684243a3568f920f99c37baf21329f267022d2d23e4b2d8bc6e16645144ee4c1"} Jan 24 03:44:06 crc kubenswrapper[4772]: W0124 03:44:06.872303 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f342eb_5009_4dcc_a013_426ea46c1959.slice/crio-587b32ed2d9a7839e543216a34233cd91086ecbfd826d963e20c31bb33c3247c WatchSource:0}: Error finding container 587b32ed2d9a7839e543216a34233cd91086ecbfd826d963e20c31bb33c3247c: Status 404 returned error can't find the container with id 587b32ed2d9a7839e543216a34233cd91086ecbfd826d963e20c31bb33c3247c Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.879726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.880027 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.380015694 +0000 UTC m=+144.417106409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.981552 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.981928 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.481900772 +0000 UTC m=+144.518991497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:06 crc kubenswrapper[4772]: I0124 03:44:06.982191 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:06 crc kubenswrapper[4772]: E0124 03:44:06.983618 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.48360908 +0000 UTC m=+144.520699805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.074612 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.087363 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.087665 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.587649439 +0000 UTC m=+144.624740164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.089122 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.100062 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.104798 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mshdh"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.131476 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.143679 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lfvrg"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.151984 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.155463 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.156584 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sflxf" podStartSLOduration=123.156567432 podStartE2EDuration="2m3.156567432s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:07.141361246 +0000 UTC m=+144.178451971" watchObservedRunningTime="2026-01-24 03:44:07.156567432 +0000 UTC m=+144.193658157" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.163008 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mk8n7"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.166303 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.191161 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.191649 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.691632856 +0000 UTC m=+144.728723581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.297148 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.297293 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.797259029 +0000 UTC m=+144.834349754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.297760 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.298092 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.798077832 +0000 UTC m=+144.835168557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.342643 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5jj66"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.345333 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-d8xg2" podStartSLOduration=123.345316747 podStartE2EDuration="2m3.345316747s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:07.343200438 +0000 UTC m=+144.380291163" watchObservedRunningTime="2026-01-24 03:44:07.345316747 +0000 UTC m=+144.382407472" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.357666 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-szx8j"] Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.379547 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv"] Jan 24 03:44:07 crc kubenswrapper[4772]: W0124 03:44:07.391165 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod427804d2_581e_4a93_ac0b_3e98b25182fc.slice/crio-88ef9668e9857540138e39cb86ed2f1acfa55942dc05b6e0ff55488586094d9c WatchSource:0}: Error finding container 88ef9668e9857540138e39cb86ed2f1acfa55942dc05b6e0ff55488586094d9c: Status 404 returned error can't find the container with id 88ef9668e9857540138e39cb86ed2f1acfa55942dc05b6e0ff55488586094d9c Jan 24 03:44:07 crc kubenswrapper[4772]: W0124 03:44:07.395054 4772 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4074e9d5_7b79_41fe_9b4c_932a4bd47883.slice/crio-ad96f0f14320008e3cf480f4082437d9310438c318c9956c0a1941f2514d3977 WatchSource:0}: Error finding container ad96f0f14320008e3cf480f4082437d9310438c318c9956c0a1941f2514d3977: Status 404 returned error can't find the container with id ad96f0f14320008e3cf480f4082437d9310438c318c9956c0a1941f2514d3977 Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.400213 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.400820 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9j994"] Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.401521 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.901495413 +0000 UTC m=+144.938586138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.401634 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.402200 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:07.902192943 +0000 UTC m=+144.939283668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: W0124 03:44:07.410079 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735b86db_4d0b_4f1e_bfef_74f16fffa13d.slice/crio-6adf743eae4b8a070a97a5baf55ad057270352356506d4bac01b84cad1e080b2 WatchSource:0}: Error finding container 6adf743eae4b8a070a97a5baf55ad057270352356506d4bac01b84cad1e080b2: Status 404 returned error can't find the container with id 6adf743eae4b8a070a97a5baf55ad057270352356506d4bac01b84cad1e080b2 Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.420166 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-d8xg2" Jan 24 03:44:07 crc kubenswrapper[4772]: W0124 03:44:07.420268 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69494268_9917_4f39_a8d8_0a73e898ea6e.slice/crio-e7b0623a9b119326d3a1843e804700c28484c0e5b3befd593043a78f403c63b4 WatchSource:0}: Error finding container e7b0623a9b119326d3a1843e804700c28484c0e5b3befd593043a78f403c63b4: Status 404 returned error can't find the container with id e7b0623a9b119326d3a1843e804700c28484c0e5b3befd593043a78f403c63b4 Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.482015 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.489428 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:07 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:07 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:07 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.489680 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.502546 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.503003 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 03:44:08.002987001 +0000 UTC m=+145.040077726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.530109 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-l9vkq" podStartSLOduration=123.530093531 podStartE2EDuration="2m3.530093531s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:07.529317249 +0000 UTC m=+144.566407974" watchObservedRunningTime="2026-01-24 03:44:07.530093531 +0000 UTC m=+144.567184256" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.611715 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.612043 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.11203187 +0000 UTC m=+145.149122595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.637773 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p2fqb" podStartSLOduration=123.637756422 podStartE2EDuration="2m3.637756422s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:07.63626462 +0000 UTC m=+144.673355345" watchObservedRunningTime="2026-01-24 03:44:07.637756422 +0000 UTC m=+144.674847147" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.718849 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.719237 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.219222457 +0000 UTC m=+145.256313182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.741397 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ktgts" podStartSLOduration=123.741378849 podStartE2EDuration="2m3.741378849s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:07.70470168 +0000 UTC m=+144.741792405" watchObservedRunningTime="2026-01-24 03:44:07.741378849 +0000 UTC m=+144.778469564" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.820526 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.820925 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.32091488 +0000 UTC m=+145.358005605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.831598 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6nbg6" podStartSLOduration=123.831584629 podStartE2EDuration="2m3.831584629s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:07.783676995 +0000 UTC m=+144.820767720" watchObservedRunningTime="2026-01-24 03:44:07.831584629 +0000 UTC m=+144.868675354" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.921472 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:07 crc kubenswrapper[4772]: E0124 03:44:07.922067 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.422052907 +0000 UTC m=+145.459143632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.981201 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" event={"ID":"9c289b5b-107c-4fce-8774-93bdf5001627","Type":"ContainerStarted","Data":"9384db820449ebc49e33fbd1832006e027a3ee188d2d8d09e8d548ca1a19c4e2"} Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.981991 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" Jan 24 03:44:07 crc kubenswrapper[4772]: I0124 03:44:07.989992 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" event={"ID":"9d623682-cd3a-44e9-b8de-1b4134bbb2b6","Type":"ContainerStarted","Data":"465645b6ff772143b7d046eb00e5c5513cefe93d0cd81ff54e68889a897e4abf"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:07.993844 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" event={"ID":"4074e9d5-7b79-41fe-9b4c-932a4bd47883","Type":"ContainerStarted","Data":"ad96f0f14320008e3cf480f4082437d9310438c318c9956c0a1941f2514d3977"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:07.999049 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" event={"ID":"62148cbe-9135-4627-8b05-05f8f4465d20","Type":"ContainerStarted","Data":"4022268c11a68c18a294db25aa9b151a34aea440b57141bca5ad6c47926d3299"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.016353 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9j994" event={"ID":"f6d4a349-9797-4909-b0fe-eca12c6f7435","Type":"ContainerStarted","Data":"dd0b57b10b28bc1600082ad416bd1029f0a01d33eb5e4e82d0b3f649a90f46db"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.026426 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" event={"ID":"fc81cbc1-3a36-4bbc-8b30-198766877216","Type":"ContainerStarted","Data":"ec3e81eb1ed333701d6c586cdb4f8d69c55ce7f50d092a9b81f7605a53e73519"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.032478 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.032798 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.532784253 +0000 UTC m=+145.569874978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.072257 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cn7fb" event={"ID":"22621d4a-b5a2-4e46-97b2-245d5a5e4c0f","Type":"ContainerStarted","Data":"4abe75a8f1adca80071bd3f8e2ec145cd660988ec40ff2357518e640c0bb0cbd"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.074778 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-szx8j" event={"ID":"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5","Type":"ContainerStarted","Data":"af393358711874a7e4a41a991d84a0a930a743d831db14b2d4b58ae32410df16"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.077502 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" event={"ID":"55c2640f-1744-4c8e-a11c-85efa6215c47","Type":"ContainerStarted","Data":"d881c2ae27228a4fe6d3304aebdf5ffb68980c9200cbe44ccf1b692119d1e276"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.082185 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" event={"ID":"3c951b6b-540d-4a3d-aeae-33ad49519b13","Type":"ContainerStarted","Data":"bdb92ebf740b5e3b0d8cd4a975d151540f8e6f787cf9873f863eb67febbb0959"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.082207 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" event={"ID":"3c951b6b-540d-4a3d-aeae-33ad49519b13","Type":"ContainerStarted","Data":"6a731c125559d5f2d3d1313c1b51edf211cc197d031ce096a1f4c33b17397744"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.082605 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.088684 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cn7fb" podStartSLOduration=6.088665851 podStartE2EDuration="6.088665851s" podCreationTimestamp="2026-01-24 03:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.088487346 +0000 UTC m=+145.125578071" watchObservedRunningTime="2026-01-24 03:44:08.088665851 +0000 UTC m=+145.125756576" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.089539 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv" podStartSLOduration=124.089534196 podStartE2EDuration="2m4.089534196s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.014996274 +0000 UTC m=+145.052086999" watchObservedRunningTime="2026-01-24 03:44:08.089534196 +0000 UTC m=+145.126624921" Jan 
24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.101240 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lfvrg" event={"ID":"427804d2-581e-4a93-ac0b-3e98b25182fc","Type":"ContainerStarted","Data":"88ef9668e9857540138e39cb86ed2f1acfa55942dc05b6e0ff55488586094d9c"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.106217 4772 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b2qsl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.106268 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" podUID="3c951b6b-540d-4a3d-aeae-33ad49519b13" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.120233 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" podStartSLOduration=124.120216606 podStartE2EDuration="2m4.120216606s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.119937118 +0000 UTC m=+145.157027843" watchObservedRunningTime="2026-01-24 03:44:08.120216606 +0000 UTC m=+145.157307331" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.125306 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" event={"ID":"735b86db-4d0b-4f1e-bfef-74f16fffa13d","Type":"ContainerStarted","Data":"6adf743eae4b8a070a97a5baf55ad057270352356506d4bac01b84cad1e080b2"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.133419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.134128 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.634113836 +0000 UTC m=+145.671204561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.171147 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" event={"ID":"e6f342eb-5009-4dcc-a013-426ea46c1959","Type":"ContainerStarted","Data":"b336202ff569c91719ea439cc4ae6aca80d46b18a9503e169bcc1f7e6dfee2b4"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.171202 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" event={"ID":"e6f342eb-5009-4dcc-a013-426ea46c1959","Type":"ContainerStarted","Data":"587b32ed2d9a7839e543216a34233cd91086ecbfd826d963e20c31bb33c3247c"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.177851 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" event={"ID":"dfc35e52-c6b0-4011-89fa-a3b77f5ef706","Type":"ContainerStarted","Data":"1756dce1f9187b1771db20b9e59498892f53b40c8762a698dfd8caaac8e89920"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.184797 4772 generic.go:334] "Generic (PLEG): container finished" podID="58f2c7e6-4613-44d2-a060-2abdf10e01a2" containerID="e36cdf31cb99fa630c12d4374f6820c492835003564fdfe27e8b7c01a3317da0" exitCode=0 Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.185484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" event={"ID":"58f2c7e6-4613-44d2-a060-2abdf10e01a2","Type":"ContainerDied","Data":"e36cdf31cb99fa630c12d4374f6820c492835003564fdfe27e8b7c01a3317da0"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.194924 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9czt7" podStartSLOduration=124.194904661 podStartE2EDuration="2m4.194904661s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.193937684 +0000 UTC m=+145.231028409" watchObservedRunningTime="2026-01-24 03:44:08.194904661 +0000 UTC m=+145.231995386" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.205989 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" event={"ID":"2ed4e912-e375-41c4-a319-a360e33e8fde","Type":"ContainerStarted","Data":"54ce1f99af4983fc2717ab5edc38ebf91754a43a99a572246448cc72075f2bbd"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.234905 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:08 
crc kubenswrapper[4772]: E0124 03:44:08.236275 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.736261651 +0000 UTC m=+145.773352376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.271140 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" event={"ID":"26bbc842-3687-4b32-8530-6ae150b7126b","Type":"ContainerStarted","Data":"2618df2af1e7265b093dd82f98149a57bb8502f685c27b47ca4bf0006de4e070"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.279704 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" event={"ID":"0f9d3510-946e-49b9-bd37-aa49de76ee43","Type":"ContainerStarted","Data":"72647aa4c00e7521fe8b0d074e29c2cf7ae10a6168f99de5a51e59fe03c75641"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.293124 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp" event={"ID":"f22ad02a-18c0-4c78-ba2e-424e8431ecb0","Type":"ContainerStarted","Data":"078c8ac89189e0cc9bd94286acb976bcb0cf956cf53c1c40c3b375917e54270a"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.293180 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp" event={"ID":"f22ad02a-18c0-4c78-ba2e-424e8431ecb0","Type":"ContainerStarted","Data":"5c05a6afd75d2832f15f57d4f4ea0e69e631400042b3c7eafdc7f07d1ccf67bb"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.302978 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" event={"ID":"96315b77-d775-4329-8a85-0fd2705bf278","Type":"ContainerStarted","Data":"efe3cedec31e3845ade355bfeda527d54a01389c756160c4995a79a1f3d6e482"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.303042 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" event={"ID":"96315b77-d775-4329-8a85-0fd2705bf278","Type":"ContainerStarted","Data":"87c13c7fe7a98d38e4cb6ee55cf4179b360fa15d120dbcf5b159a35174bcc33c"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.317192 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" event={"ID":"74ba3508-b86b-4ab5-8a85-91dddde0df79","Type":"ContainerStarted","Data":"34732accd8ffe3ffa6631557b9711e7a28f50507e1f0a144d02fcb93a6680d2c"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.335443 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.336759 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.83672275 +0000 UTC m=+145.873813475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.340463 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" podStartSLOduration=124.34030678 podStartE2EDuration="2m4.34030678s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.33744067 +0000 UTC m=+145.374531395" watchObservedRunningTime="2026-01-24 03:44:08.34030678 +0000 UTC m=+145.377397505" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.345593 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" event={"ID":"6b82cfc0-bf71-4a95-8797-f92690f9a2b0","Type":"ContainerStarted","Data":"19d5b2453331cfbc16ed008a76543c71efb9568fe88076c1b1f25fe0d39bf751"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.365255 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" event={"ID":"4ef7ecc6-52d7-4c08-984b-f4fab28494d2","Type":"ContainerStarted","Data":"dd82a25e86de4198decd7bb845f43a7d6ce9043f3043930bb4fd9c3fb82e670c"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.373114 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" event={"ID":"f98b89d5-0baf-4892-8d9b-44e64a3d793b","Type":"ContainerStarted","Data":"fabbf4aa6c892e1bbeb9b9a2c3b7cf6727abd839757a96ced019d5a1a75cb9bf"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.374495 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xgt94" podStartSLOduration=125.374482899 podStartE2EDuration="2m5.374482899s" podCreationTimestamp="2026-01-24 03:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.37165123 +0000 UTC m=+145.408741955" watchObservedRunningTime="2026-01-24 03:44:08.374482899 +0000 UTC m=+145.411573614" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.375963 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" event={"ID":"e18f918d-3751-4397-8029-4b1a3bc87953","Type":"ContainerStarted","Data":"93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.375989 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" event={"ID":"e18f918d-3751-4397-8029-4b1a3bc87953","Type":"ContainerStarted","Data":"008d7d50967c24e0dc8dc1d625108fcba355ceed6bb18decd3dc600258f49af6"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.376869 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.378033 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" event={"ID":"26a10b7a-8796-4116-8e3b-c8a3f4e06edb","Type":"ContainerStarted","Data":"34e512dfda2c45049bd6f4c3016bf8f40e7cc277cb304d1d6cbc7ee7bd972c63"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.379477 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" event={"ID":"69494268-9917-4f39-a8d8-0a73e898ea6e","Type":"ContainerStarted","Data":"e7b0623a9b119326d3a1843e804700c28484c0e5b3befd593043a78f403c63b4"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.383646 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8vwrd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.383707 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.387695 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" event={"ID":"e9bc8517-d223-42cf-9c6d-5e96dbc58e27","Type":"ContainerStarted","Data":"575bec079912063de13d5dad1e6f721fc20fd5c49bb5af05b32833adf5b23d79"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.398613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-86pjh" event={"ID":"fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610","Type":"ContainerStarted","Data":"b21e7f4b0ad910a08d413bf1f4e5c5e3d2d7cbafe6bbe6ffafe06011c6f53466"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.399275 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-86pjh" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.406073 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-86pjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.406126 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-86pjh" podUID="fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.410271 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" event={"ID":"fbf79dca-3857-4554-9b1a-8b98d98c88ad","Type":"ContainerStarted","Data":"87fde037a411221e3bffa919d3324433a80d5abf42a87c0fad7e8b375f5c1c9e"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.411175 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.413847 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" event={"ID":"d1fa00f5-8e93-4be6-aaff-228ccd9584f4","Type":"ContainerStarted","Data":"716f6bdfe71bfe389f9cc74786a7220d11af2148cdcff3629b417552ebbe7980"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.421945 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" event={"ID":"0a7e7f62-aefc-4d8c-a87e-acdb182280fd","Type":"ContainerStarted","Data":"e27409ffda414f0b2f81fc768c66a2c549c8eda7bf7fac9521b2cf6c42121923"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.422009 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" event={"ID":"0a7e7f62-aefc-4d8c-a87e-acdb182280fd","Type":"ContainerStarted","Data":"b899d8dea48a7ec0c0395409fda2c748f75d8de80ad905c1a74e682ed8198182"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.427295 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.436478 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" event={"ID":"249748cf-cf33-40d6-bbe4-74af2c05395c","Type":"ContainerStarted","Data":"ed029eeebbede2b89b4864463d8a69430ac01d9b16bc2255cc6e1935c8ffcb96"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.437120 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.437457 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:08.937448115 +0000 UTC m=+145.974538840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.472133 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ttxxp" podStartSLOduration=124.472109538 podStartE2EDuration="2m4.472109538s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.405462468 +0000 UTC m=+145.442553193" watchObservedRunningTime="2026-01-24 03:44:08.472109538 +0000 UTC m=+145.509200263" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.487095 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:08 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:08 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:08 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.487444 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.498277 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mq527" event={"ID":"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21","Type":"ContainerStarted","Data":"73235f16ef1d254683fb42f68875e5ba70680e9126974b2f667b24d37d9abb1a"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.499642 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-86pjh" podStartSLOduration=124.499621149 podStartE2EDuration="2m4.499621149s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.467896389 +0000 UTC m=+145.504987114" watchObservedRunningTime="2026-01-24 03:44:08.499621149 +0000 UTC m=+145.536711874" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.511605 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" event={"ID":"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb","Type":"ContainerStarted","Data":"506e7aa95a2e210b583d4102be5cbbd0e838f296cc574d72ab68708095b6543d"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.511643 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" event={"ID":"6ada6c66-3eb9-4d60-9f0c-654ee96e06cb","Type":"ContainerStarted","Data":"0ff7140705e25651cc83475b68c57f997b60753300e732d6c762e97852869f99"} Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 
03:44:08.516828 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" podStartSLOduration=124.516809562 podStartE2EDuration="2m4.516809562s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.498548229 +0000 UTC m=+145.535638954" watchObservedRunningTime="2026-01-24 03:44:08.516809562 +0000 UTC m=+145.553900277" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.539714 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.540981 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.040949699 +0000 UTC m=+146.078040424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.544280 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.568480 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.068464381 +0000 UTC m=+146.105555106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.605659 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" podStartSLOduration=124.605632783 podStartE2EDuration="2m4.605632783s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.535070144 +0000 UTC m=+145.572160869" watchObservedRunningTime="2026-01-24 03:44:08.605632783 +0000 UTC m=+145.642723508" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.607518 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" podStartSLOduration=124.607512716 podStartE2EDuration="2m4.607512716s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.605424537 +0000 UTC m=+145.642515262" watchObservedRunningTime="2026-01-24 03:44:08.607512716 +0000 UTC m=+145.644603431" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.632483 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" podStartSLOduration=124.632462256 podStartE2EDuration="2m4.632462256s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.630992585 +0000 UTC m=+145.668083310" watchObservedRunningTime="2026-01-24 03:44:08.632462256 +0000 UTC m=+145.669553001" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.645624 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.645776 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.145752409 +0000 UTC m=+146.182843134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.646396 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.646697 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.146688865 +0000 UTC m=+146.183779580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.657304 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wpvk5" podStartSLOduration=124.657285692 podStartE2EDuration="2m4.657285692s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:08.655996926 +0000 UTC m=+145.693087651" watchObservedRunningTime="2026-01-24 03:44:08.657285692 +0000 UTC m=+145.694376407" Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.747399 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.747973 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.247960696 +0000 UTC m=+146.285051421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.853344 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.854005 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.353994299 +0000 UTC m=+146.391085024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.954339 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.954476 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.454448908 +0000 UTC m=+146.491539633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.954603 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:08 crc kubenswrapper[4772]: E0124 03:44:08.955048 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.455042134 +0000 UTC m=+146.492132859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.985217 4772 csr.go:261] certificate signing request csr-s8vtq is approved, waiting to be issued Jan 24 03:44:08 crc kubenswrapper[4772]: I0124 03:44:08.987891 4772 csr.go:257] certificate signing request csr-s8vtq is issued Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.062365 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.062671 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.562653734 +0000 UTC m=+146.599744459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.169475 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.170088 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.670077007 +0000 UTC m=+146.707167732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.272645 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.273036 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.773021315 +0000 UTC m=+146.810112040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.379462 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.380368 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.880355266 +0000 UTC m=+146.917445991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.481952 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.482799 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:09.98277567 +0000 UTC m=+147.019866395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.487215 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:09 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:09 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:09 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.487282 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.554763 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-crfdt" event={"ID":"0a7e7f62-aefc-4d8c-a87e-acdb182280fd","Type":"ContainerStarted","Data":"4a50d41a40a2d2ba6f931da0437888d2793435975e27f8aba4300a9d91d4dda5"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.560631 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" event={"ID":"62148cbe-9135-4627-8b05-05f8f4465d20","Type":"ContainerStarted","Data":"e0172404737feb60e73569c13b3ca7e223f56cef839a795e4c473e4cd9563eea"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.562081 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" event={"ID":"26a10b7a-8796-4116-8e3b-c8a3f4e06edb","Type":"ContainerStarted","Data":"caa093bbb934d85f9bb1ae8d0a18e568f1b14e2ecbd52b90232a52d2091a3ee2"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.563765 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" event={"ID":"249748cf-cf33-40d6-bbe4-74af2c05395c","Type":"ContainerStarted","Data":"3623b16c0adb5c99affdca9e352708a6b9a4de6f3ed8ab114637a6e1d9bf4d1d"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.576820 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" event={"ID":"f98b89d5-0baf-4892-8d9b-44e64a3d793b","Type":"ContainerStarted","Data":"d492809874d7ce319af5392567e90f2a2b3b3c60c11c4fd9c3e62f88eff28226"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.576871 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" event={"ID":"f98b89d5-0baf-4892-8d9b-44e64a3d793b","Type":"ContainerStarted","Data":"6111d98687ceccf51809ecc71c67f2932b31e10c8e3c210978a53d3c899a4d71"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.578821 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f88sl" event={"ID":"e9bc8517-d223-42cf-9c6d-5e96dbc58e27","Type":"ContainerStarted","Data":"9de59b4374e5a2406f494fd1a19bfa63d44db65d4208bacdf48fa1618d7b504c"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.580916 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" event={"ID":"735b86db-4d0b-4f1e-bfef-74f16fffa13d","Type":"ContainerStarted","Data":"470e14b81147c28cff1efb5263f67acb0f03dae60f4e67f0e91b9a75f9e4e0ab"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.580965 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" event={"ID":"735b86db-4d0b-4f1e-bfef-74f16fffa13d","Type":"ContainerStarted","Data":"fda083b0677090e848021ce4c8d9555dcf1a30fef7da5e05cc7bf93ee46944a8"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.584197 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.586047 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.086030497 +0000 UTC m=+147.123121222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.588323 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" event={"ID":"0f9d3510-946e-49b9-bd37-aa49de76ee43","Type":"ContainerStarted","Data":"88cbe7233dfaa290ac7e7b346b85058c4670aa64ac2d53b00cad6c466f3b927c"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.590686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" event={"ID":"2ed4e912-e375-41c4-a319-a360e33e8fde","Type":"ContainerStarted","Data":"68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.591326 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.593331 4772 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mk8n7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.593373 4772 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" podUID="2ed4e912-e375-41c4-a319-a360e33e8fde" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.600175 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" event={"ID":"d1fa00f5-8e93-4be6-aaff-228ccd9584f4","Type":"ContainerStarted","Data":"a1b952aea7946f86e26e82f8f75189002499f859112f9f31cdef21a7929ad6cf"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.606663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" event={"ID":"6b82cfc0-bf71-4a95-8797-f92690f9a2b0","Type":"ContainerStarted","Data":"67627cdc5086f26b1a8ffaefd1f8a06728f4f46dbf0ecbee8b6df77b810c77ff"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.607039 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.609325 4772 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mshdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.609363 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" podUID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.610510 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" event={"ID":"26bbc842-3687-4b32-8530-6ae150b7126b","Type":"ContainerStarted","Data":"37c17428d73f53bdbf11e3b13cda62fe664b6a3bb31f421ef10a18c89535b24d"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.610556 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" event={"ID":"26bbc842-3687-4b32-8530-6ae150b7126b","Type":"ContainerStarted","Data":"b70ef5efdefbc1ca3694999682dd4df4d3f99a9db42a0af3c56beef0f9352eee"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.618151 4772 generic.go:334] "Generic (PLEG): container finished" podID="69494268-9917-4f39-a8d8-0a73e898ea6e" containerID="d5f4da0cae883987229c5fb06a011ab9124db604de98073b4372eb12a9919f44" exitCode=0 Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.618224 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" event={"ID":"69494268-9917-4f39-a8d8-0a73e898ea6e","Type":"ContainerDied","Data":"d5f4da0cae883987229c5fb06a011ab9124db604de98073b4372eb12a9919f44"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.629082 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lpsdw" podStartSLOduration=125.629056704 podStartE2EDuration="2m5.629056704s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.626521183 +0000 UTC m=+146.663611908" watchObservedRunningTime="2026-01-24 03:44:09.629056704 +0000 UTC m=+146.666147429" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.634515 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mq527" event={"ID":"11840fdc-3cd8-4efe-bf63-68cfc5ab4f21","Type":"ContainerStarted","Data":"30950d838d00915fabf3a14e5811ce813a5f01c8a215c152d6662c6e8ff07b0f"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.634573 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mq527" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.640632 4772 generic.go:334] "Generic (PLEG): container finished" podID="dfc35e52-c6b0-4011-89fa-a3b77f5ef706" containerID="11812af5f7158c5bca6549700dab142af77f4ba69bdfee4b9a078452d1718733" exitCode=0 Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.640995 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" event={"ID":"dfc35e52-c6b0-4011-89fa-a3b77f5ef706","Type":"ContainerDied","Data":"11812af5f7158c5bca6549700dab142af77f4ba69bdfee4b9a078452d1718733"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.654555 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-szx8j" event={"ID":"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5","Type":"ContainerStarted","Data":"a03156436edaaf06b74b3996f185d3d70e8ad4bb40526191d60744443149958c"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.678588 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" event={"ID":"58f2c7e6-4613-44d2-a060-2abdf10e01a2","Type":"ContainerStarted","Data":"bdb77a9a8b55d7d337f8ba0a5ab3cdb71fe7f66a03f0b54eef263dc82f8c6833"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.689314 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.690681 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.190660572 +0000 UTC m=+147.227751297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.697175 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" podStartSLOduration=125.697155894 podStartE2EDuration="2m5.697155894s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.674681984 +0000 UTC m=+146.711772709" watchObservedRunningTime="2026-01-24 03:44:09.697155894 +0000 UTC m=+146.734246619" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.703776 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" podStartSLOduration=125.703760919 podStartE2EDuration="2m5.703760919s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.696361982 +0000 UTC m=+146.733452707" watchObservedRunningTime="2026-01-24 03:44:09.703760919 +0000 UTC m=+146.740851644" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.735370 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvdjm" podStartSLOduration=125.735349296 podStartE2EDuration="2m5.735349296s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.72302044 +0000 UTC m=+146.760111165" watchObservedRunningTime="2026-01-24 03:44:09.735349296 +0000 UTC m=+146.772440021" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.745848 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp" event={"ID":"f22ad02a-18c0-4c78-ba2e-424e8431ecb0","Type":"ContainerStarted","Data":"4fa1fa526c8d8d77bd4d301510f5ff985b7e12e1d9d03b87ed61338ad47269dd"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.751504 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lfvrg" event={"ID":"427804d2-581e-4a93-ac0b-3e98b25182fc","Type":"ContainerStarted","Data":"c185618c79be21e751eacbcfe4cb8bfb83040563130b7621a33e9cf306817306"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.758747 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" event={"ID":"9d623682-cd3a-44e9-b8de-1b4134bbb2b6","Type":"ContainerStarted","Data":"8042294434f13d8b9f5612854924925d161816a3209b68edae8e59b56cdae24b"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.785415 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" 
event={"ID":"4074e9d5-7b79-41fe-9b4c-932a4bd47883","Type":"ContainerStarted","Data":"ee81557590f2bfeb74b9d0603f704e25be2e8474fe66bbf269dabe7c4a281c86"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.787283 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-429g4" podStartSLOduration=125.787266732 podStartE2EDuration="2m5.787266732s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.739336057 +0000 UTC m=+146.776426782" watchObservedRunningTime="2026-01-24 03:44:09.787266732 +0000 UTC m=+146.824357457" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.787877 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.792088 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 03:44:09.795559 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.295545254 +0000 UTC m=+147.332635979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.795858 4772 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cm82q container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.795918 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" podUID="4074e9d5-7b79-41fe-9b4c-932a4bd47883" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.798961 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" event={"ID":"fc81cbc1-3a36-4bbc-8b30-198766877216","Type":"ContainerStarted","Data":"d5a1c2762eaf1f7ccdc1e44f8a3aa63355ab47553c5436d5eee12d28760f67ed"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.804174 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" podStartSLOduration=125.804131805 podStartE2EDuration="2m5.804131805s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.785516423 +0000 UTC m=+146.822607148" watchObservedRunningTime="2026-01-24 03:44:09.804131805 +0000 UTC m=+146.841222540" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.805493 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9j994" event={"ID":"f6d4a349-9797-4909-b0fe-eca12c6f7435","Type":"ContainerStarted","Data":"4dbcdf1cffe595a9320bf163d5a4d9fe03848b5e43675ed55f53989ac734c774"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.811074 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8jvkg" podStartSLOduration=125.81105862 podStartE2EDuration="2m5.81105862s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.810608807 +0000 UTC m=+146.847699532" watchObservedRunningTime="2026-01-24 03:44:09.81105862 +0000 UTC m=+146.848149345" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.814212 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-86pjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.814269 4772 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-86pjh" podUID="fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.814389 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" event={"ID":"55c2640f-1744-4c8e-a11c-85efa6215c47","Type":"ContainerStarted","Data":"523f4d035eb8155872d3e34278d3591399e472e78291f8b80ce8cb7b25701052"} Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.814431 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.815809 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8vwrd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.815847 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.841774 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b2qsl" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.852054 4772 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zgm46 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.852109 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" podUID="55c2640f-1744-4c8e-a11c-85efa6215c47" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.853013 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-99789" podStartSLOduration=125.852996966 podStartE2EDuration="2m5.852996966s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.852632876 +0000 UTC m=+146.889723601" watchObservedRunningTime="2026-01-24 03:44:09.852996966 +0000 UTC m=+146.890087691" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.896266 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:09 crc kubenswrapper[4772]: E0124 
03:44:09.896595 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.396578679 +0000 UTC m=+147.433669404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.946541 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7n9pf" podStartSLOduration=125.94652451 podStartE2EDuration="2m5.94652451s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.914769099 +0000 UTC m=+146.951859824" watchObservedRunningTime="2026-01-24 03:44:09.94652451 +0000 UTC m=+146.983615235" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.973323 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fcfgv" podStartSLOduration=126.973303571 podStartE2EDuration="2m6.973303571s" podCreationTimestamp="2026-01-24 03:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.946294583 +0000 UTC m=+146.983385308" watchObservedRunningTime="2026-01-24 03:44:09.973303571 +0000 UTC m=+147.010394296" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.998797 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-g767c" podStartSLOduration=125.998778156 podStartE2EDuration="2m5.998778156s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.974790073 +0000 UTC m=+147.011880798" watchObservedRunningTime="2026-01-24 03:44:09.998778156 +0000 UTC m=+147.035868881" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.999025 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lfvrg" podStartSLOduration=7.999018012 podStartE2EDuration="7.999018012s" podCreationTimestamp="2026-01-24 03:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:09.99608632 +0000 UTC m=+147.033177045" watchObservedRunningTime="2026-01-24 03:44:09.999018012 +0000 UTC m=+147.036108737" Jan 24 03:44:09 crc kubenswrapper[4772]: I0124 03:44:09.999149 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.002217 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.502204802 +0000 UTC m=+147.539295527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.003707 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-24 03:39:08 +0000 UTC, rotation deadline is 2026-11-25 12:14:28.463996897 +0000 UTC Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.003767 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7328h30m18.460233442s for next certificate rotation Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.031696 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" podStartSLOduration=126.031679298 podStartE2EDuration="2m6.031679298s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:10.030094944 +0000 UTC m=+147.067185669" watchObservedRunningTime="2026-01-24 03:44:10.031679298 +0000 UTC m=+147.068770023" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.100113 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.100299 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.600255992 +0000 UTC m=+147.637346717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.100549 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.101090 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.601065555 +0000 UTC m=+147.638156280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.114300 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wwhc2" podStartSLOduration=126.114285546 podStartE2EDuration="2m6.114285546s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:10.086095955 +0000 UTC m=+147.123186680" watchObservedRunningTime="2026-01-24 03:44:10.114285546 +0000 UTC m=+147.151376271" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.115236 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" podStartSLOduration=126.115231642 podStartE2EDuration="2m6.115231642s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:10.11442529 +0000 UTC m=+147.151516015" watchObservedRunningTime="2026-01-24 03:44:10.115231642 +0000 UTC m=+147.152322357" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.145375 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mq527" podStartSLOduration=8.145357988 podStartE2EDuration="8.145357988s" podCreationTimestamp="2026-01-24 03:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:10.140960304 +0000 UTC m=+147.178051029" watchObservedRunningTime="2026-01-24 03:44:10.145357988 +0000 UTC m=+147.182448713" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.204466 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.204814 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.704799265 +0000 UTC m=+147.741889990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.220723 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9h8qp" podStartSLOduration=126.220708431 podStartE2EDuration="2m6.220708431s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:10.219640441 +0000 UTC m=+147.256731176" watchObservedRunningTime="2026-01-24 03:44:10.220708431 +0000 UTC m=+147.257799156" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.221616 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9j994" podStartSLOduration=126.221610977 podStartE2EDuration="2m6.221610977s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:10.191926884 +0000 UTC m=+147.229017649" watchObservedRunningTime="2026-01-24 03:44:10.221610977 +0000 UTC m=+147.258701702" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.306400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.306824 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.806806407 +0000 UTC m=+147.843897122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.406941 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.407185 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.907136681 +0000 UTC m=+147.944227406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.407379 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.407726 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:10.907708177 +0000 UTC m=+147.944798902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.490290 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:10 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:10 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:10 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.490936 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.508809 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.509068 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.00903761 +0000 UTC m=+148.046128335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.509477 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.509828 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.009821342 +0000 UTC m=+148.046912067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.610831 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.611020 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.11099308 +0000 UTC m=+148.148083805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.611443 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.611806 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.111795992 +0000 UTC m=+148.148886717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.712667 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.712872 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.212846307 +0000 UTC m=+148.249937032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.713024 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.713289 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.213278029 +0000 UTC m=+148.250368754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.814427 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.814632 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.314606572 +0000 UTC m=+148.351697297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.815047 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.815437 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.315418935 +0000 UTC m=+148.352509660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.868413 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-szx8j" event={"ID":"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5","Type":"ContainerStarted","Data":"993591e189dc298c55076152b360394fb8b291af8429d3735dea86d4bea6aa5c"} Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.888711 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" event={"ID":"dfc35e52-c6b0-4011-89fa-a3b77f5ef706","Type":"ContainerStarted","Data":"bd5ef75a558134dbaaa067968bb27d6a8ed50c26b0e49e7cd9102b99d0c09eda"} Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.905294 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" event={"ID":"58f2c7e6-4613-44d2-a060-2abdf10e01a2","Type":"ContainerStarted","Data":"17d50cd345eebc3ea4eafbf318fa9d4c2b40229c40b2917e6c1e2eb35aa2464e"} Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.916831 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.917017 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.416991794 +0000 UTC m=+148.454082519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.917231 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:10 crc kubenswrapper[4772]: E0124 03:44:10.917639 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.417629052 +0000 UTC m=+148.454719947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.918655 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" event={"ID":"69494268-9917-4f39-a8d8-0a73e898ea6e","Type":"ContainerStarted","Data":"3d0596e605a687e7d16817463cc2a5dcb710a3086a21d0d13f8e5809fcc370d1"} Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.921328 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-86pjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.921457 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-86pjh" podUID="fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.933063 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cm82q" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.948399 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zgm46" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.979951 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:44:10 crc kubenswrapper[4772]: I0124 03:44:10.986328 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" podStartSLOduration=126.986314579 podStartE2EDuration="2m6.986314579s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:10.985052483 +0000 UTC m=+148.022143208" watchObservedRunningTime="2026-01-24 03:44:10.986314579 +0000 UTC m=+148.023405304" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.017820 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.017915 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.517898425 +0000 UTC m=+148.554989150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.021277 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.027423 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.527399411 +0000 UTC m=+148.564490136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.051182 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.122831 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.122929 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.622913651 +0000 UTC m=+148.660004376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.123101 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.123356 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.623349083 +0000 UTC m=+148.660439808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.194530 4772 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.225035 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.225368 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.725352154 +0000 UTC m=+148.762442879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.261590 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" podStartSLOduration=127.26157361 podStartE2EDuration="2m7.26157361s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:11.198723477 +0000 UTC m=+148.235814202" watchObservedRunningTime="2026-01-24 03:44:11.26157361 +0000 UTC m=+148.298664335" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.280416 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr" podStartSLOduration=128.280383018 podStartE2EDuration="2m8.280383018s" podCreationTimestamp="2026-01-24 03:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:11.263960037 +0000 UTC m=+148.301050762" watchObservedRunningTime="2026-01-24 03:44:11.280383018 +0000 UTC m=+148.317473743" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.327029 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.327880 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.82786271 +0000 UTC m=+148.864953435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.428825 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.429080 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:11.929034108 +0000 UTC m=+148.966124833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.429259 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.437037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.483245 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:11 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:11 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:11 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.483606 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.532302 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.532468 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.532517 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.532557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.533010 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.533086 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:12.033062286 +0000 UTC m=+149.070153011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.535804 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.536378 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.584238 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.597683 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.627628 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.633112 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.633285 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:12.133269207 +0000 UTC m=+149.170359932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.633555 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.633862 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:12.133855194 +0000 UTC m=+149.170945919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.697999 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.714869 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.734404 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j67dm"] Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.735113 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.735270 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:12.235244608 +0000 UTC m=+149.272335333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.735545 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.735726 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.735869 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:12.235856945 +0000 UTC m=+149.272947670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.747852 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.765658 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j67dm"] Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.837140 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.837309 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 03:44:12.33726869 +0000 UTC m=+149.374359425 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.837599 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.837713 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hwwf\" (UniqueName: \"kubernetes.io/projected/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-kube-api-access-9hwwf\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.837836 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-utilities\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.837919 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-catalog-content\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:44:11 crc kubenswrapper[4772]: E0124 03:44:11.837955 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 03:44:12.337944999 +0000 UTC m=+149.375035734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2sfxs" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.853879 4772 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-24T03:44:11.194561341Z","Handler":null,"Name":""}
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.863675 4772 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.863928 4772 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.940301 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.940703 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-catalog-content\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm"
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.940810 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hwwf\" (UniqueName: \"kubernetes.io/projected/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-kube-api-access-9hwwf\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm"
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.940866 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-utilities\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm"
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.941849 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-utilities\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm"
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.942164 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-catalog-content\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm"
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.944256 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jgctw"]
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.945145 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.952188 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.953805 4772 generic.go:334] "Generic (PLEG): container finished" podID="96315b77-d775-4329-8a85-0fd2705bf278" containerID="efe3cedec31e3845ade355bfeda527d54a01389c756160c4995a79a1f3d6e482" exitCode=0
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.953866 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" event={"ID":"96315b77-d775-4329-8a85-0fd2705bf278","Type":"ContainerDied","Data":"efe3cedec31e3845ade355bfeda527d54a01389c756160c4995a79a1f3d6e482"}
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.972619 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgctw"]
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.976004 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 24 03:44:11 crc kubenswrapper[4772]: I0124 03:44:11.982173 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hwwf\" (UniqueName: \"kubernetes.io/projected/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-kube-api-access-9hwwf\") pod \"community-operators-j67dm\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " pod="openshift-marketplace/community-operators-j67dm"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.001212 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-szx8j" event={"ID":"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5","Type":"ContainerStarted","Data":"b0dd7e7af30decdb70d884df0856b7a601bf7897c6954d8d131149e33941acda"}
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.001276 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-szx8j" event={"ID":"bcaaefc4-1474-44e9-b64f-2e495e7f8cc5","Type":"ContainerStarted","Data":"3f38aef220f838653d8097845561e360b996c85a62be39f4d890bd510b7dfa58"}
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.041488 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-utilities\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.041545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-catalog-content\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.041597 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.041629 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjs8h\" (UniqueName: \"kubernetes.io/projected/f65f1015-89a0-482e-87d3-f2b2e2149e2d-kube-api-access-sjs8h\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.042869 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-szx8j" podStartSLOduration=10.042836537 podStartE2EDuration="10.042836537s" podCreationTimestamp="2026-01-24 03:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:12.041837279 +0000 UTC m=+149.078928004" watchObservedRunningTime="2026-01-24 03:44:12.042836537 +0000 UTC m=+149.079927262"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.048631 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.048675 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.053615 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j67dm"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.130613 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-prjh9"]
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.131515 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.145348 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-catalog-content\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.145540 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjs8h\" (UniqueName: \"kubernetes.io/projected/f65f1015-89a0-482e-87d3-f2b2e2149e2d-kube-api-access-sjs8h\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.145767 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-utilities\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.180186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-catalog-content\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.183502 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-utilities\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.193193 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prjh9"]
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.214702 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2sfxs\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.247538 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-catalog-content\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.247604 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-utilities\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.247636 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/621244af-16f4-4b04-aa0c-5c71a7d49eb5-kube-api-access-xkf7f\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.255289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjs8h\" (UniqueName: \"kubernetes.io/projected/f65f1015-89a0-482e-87d3-f2b2e2149e2d-kube-api-access-sjs8h\") pod \"certified-operators-jgctw\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.298023 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgctw"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.325297 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fqbcg"]
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.328913 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.359286 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/621244af-16f4-4b04-aa0c-5c71a7d49eb5-kube-api-access-xkf7f\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.359395 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-catalog-content\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.359418 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-utilities\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.360034 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-utilities\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.360497 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-catalog-content\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.379297 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqbcg"]
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.397951 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/621244af-16f4-4b04-aa0c-5c71a7d49eb5-kube-api-access-xkf7f\") pod \"community-operators-prjh9\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.462936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqdvx\" (UniqueName: \"kubernetes.io/projected/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-kube-api-access-wqdvx\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.463020 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-catalog-content\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.463049 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-utilities\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.488148 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 03:44:12 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Jan 24 03:44:12 crc kubenswrapper[4772]: [+]process-running ok
Jan 24 03:44:12 crc kubenswrapper[4772]: healthz check failed
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.488220 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.498472 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:12 crc kubenswrapper[4772]: W0124 03:44:12.528336 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-816d98d2dba6436de1e40ba1e280833519ccce04e6c754ad81fd2a59d0619b9f WatchSource:0}: Error finding container 816d98d2dba6436de1e40ba1e280833519ccce04e6c754ad81fd2a59d0619b9f: Status 404 returned error can't find the container with id 816d98d2dba6436de1e40ba1e280833519ccce04e6c754ad81fd2a59d0619b9f
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.538785 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prjh9"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.565001 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jj66"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.565717 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqdvx\" (UniqueName: \"kubernetes.io/projected/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-kube-api-access-wqdvx\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.565776 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-catalog-content\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.565798 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-utilities\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.566237 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-utilities\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.566441 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-catalog-content\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.623778 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqdvx\" (UniqueName: \"kubernetes.io/projected/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-kube-api-access-wqdvx\") pod \"certified-operators-fqbcg\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: W0124 03:44:12.654418 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0320d4a1981c41b03a22e323682fe129503a0abb14c446b9df3bc377a0c3a213 WatchSource:0}: Error finding container 0320d4a1981c41b03a22e323682fe129503a0abb14c446b9df3bc377a0c3a213: Status 404 returned error can't find the container with id 0320d4a1981c41b03a22e323682fe129503a0abb14c446b9df3bc377a0c3a213
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.737568 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbcg"
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.789182 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j67dm"]
Jan 24 03:44:12 crc kubenswrapper[4772]: I0124 03:44:12.902081 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgctw"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.042598 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j67dm" event={"ID":"6cd7a1a3-2773-4ffc-9cef-8015556b3b33","Type":"ContainerStarted","Data":"792ee49ebe0fbee04d317205b0605fba443804dbff5cc7a055c580483bb486bf"}
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.059491 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0320d4a1981c41b03a22e323682fe129503a0abb14c446b9df3bc377a0c3a213"}
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.067590 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgctw" event={"ID":"f65f1015-89a0-482e-87d3-f2b2e2149e2d","Type":"ContainerStarted","Data":"0faac75f23d5e2504113924538452f73b0f9283f89ae4747c782458eaf027a00"}
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.069222 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"83e6d9ecc6eb09b7e8f8bfee4cd12de886f2f2d50b11bdf63d78419c3d2f3c9d"}
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.073884 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"816d98d2dba6436de1e40ba1e280833519ccce04e6c754ad81fd2a59d0619b9f"}
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.267441 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2sfxs"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.326150 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prjh9"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.377876 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqbcg"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.399620 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.400327 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.402555 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.402866 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.413003 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.483347 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 03:44:13 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Jan 24 03:44:13 crc kubenswrapper[4772]: [+]process-running ok
Jan 24 03:44:13 crc kubenswrapper[4772]: healthz check failed
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.483912 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.494118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.494216 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.596452 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.596529 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.597571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.640501 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.677379 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.730498 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lcjhk"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.731802 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.734675 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.736973 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.742993 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcjhk"]
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.778592 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.804731 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-utilities\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.804799 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-catalog-content\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.804854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq778\" (UniqueName: \"kubernetes.io/projected/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-kube-api-access-zq778\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.905694 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfvhs\" (UniqueName: \"kubernetes.io/projected/96315b77-d775-4329-8a85-0fd2705bf278-kube-api-access-wfvhs\") pod \"96315b77-d775-4329-8a85-0fd2705bf278\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") "
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.905817 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96315b77-d775-4329-8a85-0fd2705bf278-secret-volume\") pod \"96315b77-d775-4329-8a85-0fd2705bf278\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") "
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.905881 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96315b77-d775-4329-8a85-0fd2705bf278-config-volume\") pod \"96315b77-d775-4329-8a85-0fd2705bf278\" (UID: \"96315b77-d775-4329-8a85-0fd2705bf278\") "
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.906035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-utilities\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.906054 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-catalog-content\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.906100 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq778\" (UniqueName: \"kubernetes.io/projected/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-kube-api-access-zq778\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.907438 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-utilities\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.907815 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-catalog-content\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.907915 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96315b77-d775-4329-8a85-0fd2705bf278-config-volume" (OuterVolumeSpecName: "config-volume") pod "96315b77-d775-4329-8a85-0fd2705bf278" (UID: "96315b77-d775-4329-8a85-0fd2705bf278"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.917323 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96315b77-d775-4329-8a85-0fd2705bf278-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "96315b77-d775-4329-8a85-0fd2705bf278" (UID: "96315b77-d775-4329-8a85-0fd2705bf278"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.921927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96315b77-d775-4329-8a85-0fd2705bf278-kube-api-access-wfvhs" (OuterVolumeSpecName: "kube-api-access-wfvhs") pod "96315b77-d775-4329-8a85-0fd2705bf278" (UID: "96315b77-d775-4329-8a85-0fd2705bf278"). InnerVolumeSpecName "kube-api-access-wfvhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:44:13 crc kubenswrapper[4772]: I0124 03:44:13.930258 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq778\" (UniqueName: \"kubernetes.io/projected/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-kube-api-access-zq778\") pod \"redhat-marketplace-lcjhk\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.006935 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96315b77-d775-4329-8a85-0fd2705bf278-config-volume\") on node \"crc\" DevicePath \"\""
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.006972 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfvhs\" (UniqueName: \"kubernetes.io/projected/96315b77-d775-4329-8a85-0fd2705bf278-kube-api-access-wfvhs\") on node \"crc\" DevicePath \"\""
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.006984 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96315b77-d775-4329-8a85-0fd2705bf278-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.044531 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.083642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv" event={"ID":"96315b77-d775-4329-8a85-0fd2705bf278","Type":"ContainerDied","Data":"87c13c7fe7a98d38e4cb6ee55cf4179b360fa15d120dbcf5b159a35174bcc33c"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.083654 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487090-mlcjv"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.083686 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c13c7fe7a98d38e4cb6ee55cf4179b360fa15d120dbcf5b159a35174bcc33c"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.084785 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcjhk"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.091101 4772 generic.go:334] "Generic (PLEG): container finished" podID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerID="203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90" exitCode=0
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.091318 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j67dm" event={"ID":"6cd7a1a3-2773-4ffc-9cef-8015556b3b33","Type":"ContainerDied","Data":"203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.103182 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.126425 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" event={"ID":"9b58435a-62c0-4129-a8d5-434a75e0f600","Type":"ContainerStarted","Data":"fd013c1266b873b081e9f4593f584bb12918cffde42c57706607b331d879acb6"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.126938 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" event={"ID":"9b58435a-62c0-4129-a8d5-434a75e0f600","Type":"ContainerStarted","Data":"f2a425af17e5292a1342501901f312fa74c87f29783aabc992d1ebd457fa0d94"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.127006 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.143053 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j67b7"]
Jan 24 03:44:14 crc kubenswrapper[4772]: E0124 03:44:14.143406 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96315b77-d775-4329-8a85-0fd2705bf278" containerName="collect-profiles"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.143417 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="96315b77-d775-4329-8a85-0fd2705bf278" containerName="collect-profiles"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.143576 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="96315b77-d775-4329-8a85-0fd2705bf278" containerName="collect-profiles"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.144635 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.159060 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6e87d689a7b13a8304855b55593476bb84b3fabca300ed2029b77096d0ff0a08"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.168670 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j67b7"]
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.209747 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea75126bbf076ca51e209a42d10e9af7047c456fc2d1f45c84e15bd8570fdee8"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.225338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88x4\" (UniqueName: \"kubernetes.io/projected/83971acc-e6fb-4ff2-b45b-7f0dda461036-kube-api-access-k88x4\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.225434 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-catalog-content\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.225783 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-utilities\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.226501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ac70cd6a6cf2aa4f5c95d3df057004ca711d4ef87e7998e41ea4a5862a6a6428"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.226970 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.233641 4772 generic.go:334] "Generic (PLEG): container finished" podID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerID="344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35" exitCode=0
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.233724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbcg" event={"ID":"0b681c8c-16cd-49d7-b3ac-facfe4238b0d","Type":"ContainerDied","Data":"344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.233779 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbcg" event={"ID":"0b681c8c-16cd-49d7-b3ac-facfe4238b0d","Type":"ContainerStarted","Data":"dd2ae1d757bbe5b1e27816c10ee80f51457966df06c60a8f01082f83650dbd1b"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.251973 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b","Type":"ContainerStarted","Data":"980e161d9f15779e8005b2dea523d2727502691fe66a32abbdfce913be3a674d"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.263127 4772 generic.go:334] "Generic (PLEG): container finished" podID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerID="3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6" exitCode=0
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.263204 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgctw" event={"ID":"f65f1015-89a0-482e-87d3-f2b2e2149e2d","Type":"ContainerDied","Data":"3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.273037 4772 generic.go:334] "Generic (PLEG): container finished" podID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerID="4d233950f01bf0b87b0b5f648ff1b76047f6220b85410540bdb0e070b4d1b4c2" exitCode=0
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.274233 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prjh9" event={"ID":"621244af-16f4-4b04-aa0c-5c71a7d49eb5","Type":"ContainerDied","Data":"4d233950f01bf0b87b0b5f648ff1b76047f6220b85410540bdb0e070b4d1b4c2"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.274270 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prjh9" event={"ID":"621244af-16f4-4b04-aa0c-5c71a7d49eb5","Type":"ContainerStarted","Data":"0e2085096f658aba326d3b80086dfe35ad5fd002b8af57714d3cb79a316fe98f"}
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.277034 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" podStartSLOduration=130.27701099 podStartE2EDuration="2m10.27701099s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:44:14.229495257 +0000 UTC m=+151.266585982" watchObservedRunningTime="2026-01-24 03:44:14.27701099 +0000 UTC m=+151.314101715"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.327589 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88x4\" (UniqueName: \"kubernetes.io/projected/83971acc-e6fb-4ff2-b45b-7f0dda461036-kube-api-access-k88x4\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.327687 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-catalog-content\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.327729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-utilities\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.329676 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-catalog-content\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.330131 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-utilities\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.361263 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k88x4\" (UniqueName: \"kubernetes.io/projected/83971acc-e6fb-4ff2-b45b-7f0dda461036-kube-api-access-k88x4\") pod \"redhat-marketplace-j67b7\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.484173 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 03:44:14 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Jan 24 03:44:14 crc kubenswrapper[4772]: [+]process-running ok
Jan 24 03:44:14 crc kubenswrapper[4772]: healthz check failed
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.484525 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.512354 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcjhk"]
Jan 24 03:44:14 crc kubenswrapper[4772]: W0124 03:44:14.521646 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1276fcbc_1783_4f3a_8ee0_8be45b19d4b1.slice/crio-ca2029868590da0fa00e6e773eaac728b8f35ace003ba91a399ab429aea39450 WatchSource:0}: Error finding container ca2029868590da0fa00e6e773eaac728b8f35ace003ba91a399ab429aea39450: Status 404 returned error can't find the container with id ca2029868590da0fa00e6e773eaac728b8f35ace003ba91a399ab429aea39450
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.529080 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j67b7"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.734848 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j67b7"]
Jan 24 03:44:14 crc kubenswrapper[4772]: W0124 03:44:14.754704 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83971acc_e6fb_4ff2_b45b_7f0dda461036.slice/crio-7ee38e727bb8b70ad934437c425f42452e0a865df37c5014e0c11dde908ea852 WatchSource:0}: Error finding container 7ee38e727bb8b70ad934437c425f42452e0a865df37c5014e0c11dde908ea852: Status 404 returned error can't find the container with id 7ee38e727bb8b70ad934437c425f42452e0a865df37c5014e0c11dde908ea852
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.850025 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.850717 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.853574 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.856020 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.864425 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.925227 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bkqlj"]
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.926187 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.928722 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.935066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/209e808b-a94d-475a-8e3d-a1fe46ec818d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.935223 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/209e808b-a94d-475a-8e3d-a1fe46ec818d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.940196 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkqlj"]
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.956101 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sflxf"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.956253 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sflxf"
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.957714 4772 patch_prober.go:28] interesting pod/console-f9d7485db-sflxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 24 03:44:14 crc kubenswrapper[4772]: I0124 03:44:14.957791 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sflxf" podUID="c54dc1be-1a2d-433d-bb84-a274bdd4365b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.036286 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-utilities\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.036363 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54z9x\" (UniqueName: \"kubernetes.io/projected/022f55bb-f179-48cc-ae69-a9936070e3b7-kube-api-access-54z9x\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.036399 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-catalog-content\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.036455 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/209e808b-a94d-475a-8e3d-a1fe46ec818d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.036471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/209e808b-a94d-475a-8e3d-a1fe46ec818d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.036982 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/209e808b-a94d-475a-8e3d-a1fe46ec818d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.059321 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/209e808b-a94d-475a-8e3d-a1fe46ec818d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.137635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-utilities\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.137701 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54z9x\" (UniqueName: \"kubernetes.io/projected/022f55bb-f179-48cc-ae69-a9936070e3b7-kube-api-access-54z9x\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.137729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-catalog-content\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.138821 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-catalog-content\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.138805 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-utilities\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.161760 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54z9x\" (UniqueName: \"kubernetes.io/projected/022f55bb-f179-48cc-ae69-a9936070e3b7-kube-api-access-54z9x\") pod \"redhat-operators-bkqlj\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.164117 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.164155 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.170697 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.177442 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.223319 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-86pjh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.223899 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-86pjh" podUID="fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.223425 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-86pjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.223977 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-86pjh" podUID="fd6fbfcb-c0c8-4e58-9a17-9d8cf54eb610" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.238232 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkqlj"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.289915 4772 generic.go:334] "Generic (PLEG): container finished" podID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerID="d61abdb167ed3c6568a5c5e6cb56d3da84ef64155eaf5c8b5d76c5d0a3e7ee3d" exitCode=0
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.289975 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j67b7" event={"ID":"83971acc-e6fb-4ff2-b45b-7f0dda461036","Type":"ContainerDied","Data":"d61abdb167ed3c6568a5c5e6cb56d3da84ef64155eaf5c8b5d76c5d0a3e7ee3d"}
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.290022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j67b7" event={"ID":"83971acc-e6fb-4ff2-b45b-7f0dda461036","Type":"ContainerStarted","Data":"7ee38e727bb8b70ad934437c425f42452e0a865df37c5014e0c11dde908ea852"}
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.296288 4772 generic.go:334] "Generic (PLEG): container finished" podID="a842be3a-30cd-4ee9-bfe6-e6caf8ee918b" containerID="6dae68fac0caeb13b0abd882492b7b1489c0100698424d0fb720bc72ba5f943d" exitCode=0
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.296377 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b","Type":"ContainerDied","Data":"6dae68fac0caeb13b0abd882492b7b1489c0100698424d0fb720bc72ba5f943d"}
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.302725 4772 generic.go:334] "Generic (PLEG): container finished" podID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerID="8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56" exitCode=0
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.302772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcjhk" event={"ID":"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1","Type":"ContainerDied","Data":"8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56"}
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.302812 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcjhk" event={"ID":"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1","Type":"ContainerStarted","Data":"ca2029868590da0fa00e6e773eaac728b8f35ace003ba91a399ab429aea39450"}
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.310519 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g9ckr"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.324924 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8nr2"]
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.331759 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8nr2"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.347496 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8nr2"]
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.441350 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b72h\" (UniqueName: \"kubernetes.io/projected/d124ff24-991c-4a60-997d-b899e8387e0d-kube-api-access-5b72h\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.441430 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-utilities\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.441471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-catalog-content\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.480331 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.481870 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.481910 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ktgts"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.489009 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 03:44:15 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld
Jan 24 03:44:15 crc kubenswrapper[4772]: [+]process-running ok
Jan 24 03:44:15 crc kubenswrapper[4772]: healthz check failed
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.489400 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.520931 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.542249 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b72h\" (UniqueName: \"kubernetes.io/projected/d124ff24-991c-4a60-997d-b899e8387e0d-kube-api-access-5b72h\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2"
Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.542355 4772
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-utilities\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.542533 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-catalog-content\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.544780 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-utilities\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.545266 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-catalog-content\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.565377 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b72h\" (UniqueName: \"kubernetes.io/projected/d124ff24-991c-4a60-997d-b899e8387e0d-kube-api-access-5b72h\") pod \"redhat-operators-t8nr2\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.598888 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.650005 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkqlj"] Jan 24 03:44:15 crc kubenswrapper[4772]: I0124 03:44:15.661178 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:44:15 crc kubenswrapper[4772]: W0124 03:44:15.663903 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod022f55bb_f179_48cc_ae69_a9936070e3b7.slice/crio-c811f2fd6c738b78c40d63171e08bf09a181ffd4b96e773519d5578373200693 WatchSource:0}: Error finding container c811f2fd6c738b78c40d63171e08bf09a181ffd4b96e773519d5578373200693: Status 404 returned error can't find the container with id c811f2fd6c738b78c40d63171e08bf09a181ffd4b96e773519d5578373200693 Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.010406 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8nr2"] Jan 24 03:44:16 crc kubenswrapper[4772]: W0124 03:44:16.028508 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd124ff24_991c_4a60_997d_b899e8387e0d.slice/crio-b3d08b8bbaff4daad6e55792a58e9de90acbeb733c672d5ba6b9ff450c51ad8e WatchSource:0}: Error finding container b3d08b8bbaff4daad6e55792a58e9de90acbeb733c672d5ba6b9ff450c51ad8e: Status 404 returned error can't find the container with id b3d08b8bbaff4daad6e55792a58e9de90acbeb733c672d5ba6b9ff450c51ad8e Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.316402 4772 generic.go:334] "Generic (PLEG): container finished" podID="d124ff24-991c-4a60-997d-b899e8387e0d" containerID="2ccca00b96e00bfab784be64d8ada3e730b790ae23d882d5789359a36e15cf93" exitCode=0 Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.316790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8nr2" event={"ID":"d124ff24-991c-4a60-997d-b899e8387e0d","Type":"ContainerDied","Data":"2ccca00b96e00bfab784be64d8ada3e730b790ae23d882d5789359a36e15cf93"} Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.316818 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8nr2" event={"ID":"d124ff24-991c-4a60-997d-b899e8387e0d","Type":"ContainerStarted","Data":"b3d08b8bbaff4daad6e55792a58e9de90acbeb733c672d5ba6b9ff450c51ad8e"} Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.324368 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"209e808b-a94d-475a-8e3d-a1fe46ec818d","Type":"ContainerStarted","Data":"033b12d4c0efb3d98571ea74e8ff9d03f2f24ff635905d81a7f8481a957a9e97"} Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.326080 4772 generic.go:334] "Generic (PLEG): container finished" podID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerID="d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7" exitCode=0 Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.327038 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkqlj" event={"ID":"022f55bb-f179-48cc-ae69-a9936070e3b7","Type":"ContainerDied","Data":"d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7"} Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.327111 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkqlj" event={"ID":"022f55bb-f179-48cc-ae69-a9936070e3b7","Type":"ContainerStarted","Data":"c811f2fd6c738b78c40d63171e08bf09a181ffd4b96e773519d5578373200693"} Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.348914 4772 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-78vrm" Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.484190 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:16 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:16 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:16 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.484257 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.791710 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.892566 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kubelet-dir\") pod \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.892667 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kube-api-access\") pod \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\" (UID: \"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b\") " Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.892977 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a842be3a-30cd-4ee9-bfe6-e6caf8ee918b" (UID: "a842be3a-30cd-4ee9-bfe6-e6caf8ee918b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.893348 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.904406 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.904494 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a842be3a-30cd-4ee9-bfe6-e6caf8ee918b" (UID: "a842be3a-30cd-4ee9-bfe6-e6caf8ee918b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.904496 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:44:16 crc kubenswrapper[4772]: I0124 03:44:16.996390 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a842be3a-30cd-4ee9-bfe6-e6caf8ee918b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.272634 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mq527" Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.370609 4772 generic.go:334] "Generic (PLEG): container finished" podID="209e808b-a94d-475a-8e3d-a1fe46ec818d" containerID="01588eb8efac72d55e1e604949c6b446efbea7dd08886b1a2cf1628a6a33a09d" exitCode=0 Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.370685 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"209e808b-a94d-475a-8e3d-a1fe46ec818d","Type":"ContainerDied","Data":"01588eb8efac72d55e1e604949c6b446efbea7dd08886b1a2cf1628a6a33a09d"} Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.400759 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.403075 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a842be3a-30cd-4ee9-bfe6-e6caf8ee918b","Type":"ContainerDied","Data":"980e161d9f15779e8005b2dea523d2727502691fe66a32abbdfce913be3a674d"} Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.403116 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="980e161d9f15779e8005b2dea523d2727502691fe66a32abbdfce913be3a674d" Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.492399 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:17 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:17 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:17 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:17 crc kubenswrapper[4772]: I0124 03:44:17.492504 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.484193 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:18 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:18 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:18 crc kubenswrapper[4772]: 
healthz check failed Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.484281 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.826950 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.919992 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/209e808b-a94d-475a-8e3d-a1fe46ec818d-kube-api-access\") pod \"209e808b-a94d-475a-8e3d-a1fe46ec818d\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.920130 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/209e808b-a94d-475a-8e3d-a1fe46ec818d-kubelet-dir\") pod \"209e808b-a94d-475a-8e3d-a1fe46ec818d\" (UID: \"209e808b-a94d-475a-8e3d-a1fe46ec818d\") " Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.920265 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/209e808b-a94d-475a-8e3d-a1fe46ec818d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "209e808b-a94d-475a-8e3d-a1fe46ec818d" (UID: "209e808b-a94d-475a-8e3d-a1fe46ec818d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.920568 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/209e808b-a94d-475a-8e3d-a1fe46ec818d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:18 crc kubenswrapper[4772]: I0124 03:44:18.944158 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209e808b-a94d-475a-8e3d-a1fe46ec818d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "209e808b-a94d-475a-8e3d-a1fe46ec818d" (UID: "209e808b-a94d-475a-8e3d-a1fe46ec818d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:44:19 crc kubenswrapper[4772]: I0124 03:44:19.022028 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/209e808b-a94d-475a-8e3d-a1fe46ec818d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:19 crc kubenswrapper[4772]: I0124 03:44:19.484438 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:19 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:19 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:19 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:19 crc kubenswrapper[4772]: I0124 03:44:19.484516 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:19 crc kubenswrapper[4772]: I0124 03:44:19.774445 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"209e808b-a94d-475a-8e3d-a1fe46ec818d","Type":"ContainerDied","Data":"033b12d4c0efb3d98571ea74e8ff9d03f2f24ff635905d81a7f8481a957a9e97"} Jan 24 03:44:19 crc kubenswrapper[4772]: I0124 03:44:19.774492 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="033b12d4c0efb3d98571ea74e8ff9d03f2f24ff635905d81a7f8481a957a9e97" Jan 24 03:44:19 crc kubenswrapper[4772]: I0124 03:44:19.774556 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 03:44:20 crc kubenswrapper[4772]: I0124 03:44:20.484001 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:20 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:20 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:20 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:20 crc kubenswrapper[4772]: I0124 03:44:20.484729 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:20 crc kubenswrapper[4772]: I0124 03:44:20.804721 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-c5k7g_f98b89d5-0baf-4892-8d9b-44e64a3d793b/cluster-samples-operator/0.log" Jan 24 03:44:20 crc kubenswrapper[4772]: I0124 03:44:20.804784 4772 generic.go:334] "Generic (PLEG): container finished" podID="f98b89d5-0baf-4892-8d9b-44e64a3d793b" containerID="6111d98687ceccf51809ecc71c67f2932b31e10c8e3c210978a53d3c899a4d71" exitCode=2 Jan 24 03:44:20 crc kubenswrapper[4772]: I0124 03:44:20.804815 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" event={"ID":"f98b89d5-0baf-4892-8d9b-44e64a3d793b","Type":"ContainerDied","Data":"6111d98687ceccf51809ecc71c67f2932b31e10c8e3c210978a53d3c899a4d71"} Jan 24 03:44:20 crc kubenswrapper[4772]: I0124 03:44:20.805279 4772 scope.go:117] "RemoveContainer" containerID="6111d98687ceccf51809ecc71c67f2932b31e10c8e3c210978a53d3c899a4d71" Jan 24 03:44:21 crc kubenswrapper[4772]: I0124 03:44:21.488371 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:21 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:21 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:21 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:21 crc kubenswrapper[4772]: I0124 03:44:21.488792 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:21 crc kubenswrapper[4772]: I0124 03:44:21.837501 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-c5k7g_f98b89d5-0baf-4892-8d9b-44e64a3d793b/cluster-samples-operator/0.log" Jan 24 03:44:21 crc kubenswrapper[4772]: I0124 03:44:21.837564 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5k7g" event={"ID":"f98b89d5-0baf-4892-8d9b-44e64a3d793b","Type":"ContainerStarted","Data":"169b13bfbe59f85e36afd66cce04912cdd1ae4ad0a9ad0b2417668c6492eadfc"} Jan 24 03:44:22 crc kubenswrapper[4772]: I0124 03:44:22.482077 4772 patch_prober.go:28] interesting 
pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:22 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:22 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:22 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:22 crc kubenswrapper[4772]: I0124 03:44:22.482134 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:23 crc kubenswrapper[4772]: I0124 03:44:23.482676 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:23 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:23 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:23 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:23 crc kubenswrapper[4772]: I0124 03:44:23.483293 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:24 crc kubenswrapper[4772]: I0124 03:44:24.482082 4772 patch_prober.go:28] interesting pod/router-default-5444994796-ktgts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 03:44:24 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Jan 24 03:44:24 crc kubenswrapper[4772]: [+]process-running ok Jan 24 03:44:24 crc kubenswrapper[4772]: healthz check failed Jan 24 03:44:24 crc kubenswrapper[4772]: I0124 03:44:24.482156 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktgts" podUID="d7498c90-00fc-4024-8509-c135ff6ce906" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 03:44:24 crc kubenswrapper[4772]: I0124 03:44:24.956837 4772 patch_prober.go:28] interesting pod/console-f9d7485db-sflxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 24 03:44:24 crc kubenswrapper[4772]: I0124 03:44:24.957433 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sflxf" podUID="c54dc1be-1a2d-433d-bb84-a274bdd4365b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 24 03:44:25 crc kubenswrapper[4772]: I0124 03:44:25.234553 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-86pjh" Jan 24 03:44:25 crc kubenswrapper[4772]: I0124 03:44:25.520017 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:25 crc kubenswrapper[4772]: I0124 03:44:25.523702 4772 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ktgts" Jan 24 03:44:27 crc kubenswrapper[4772]: I0124 03:44:27.037205 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:44:27 crc kubenswrapper[4772]: I0124 03:44:27.045388 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8311b11-97fe-4657-add7-66fd66adc69f-metrics-certs\") pod \"network-metrics-daemon-mpdb8\" (UID: \"e8311b11-97fe-4657-add7-66fd66adc69f\") " pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:44:27 crc kubenswrapper[4772]: I0124 03:44:27.214647 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mpdb8" Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.129811 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mshdh"] Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.130021 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" podUID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" containerName="controller-manager" containerID="cri-o://67627cdc5086f26b1a8ffaefd1f8a06728f4f46dbf0ecbee8b6df77b810c77ff" gracePeriod=30 Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.144796 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"] Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.145028 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" podUID="fbf79dca-3857-4554-9b1a-8b98d98c88ad" containerName="route-controller-manager" containerID="cri-o://87fde037a411221e3bffa919d3324433a80d5abf42a87c0fad7e8b375f5c1c9e" gracePeriod=30 Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.917245 4772 generic.go:334] "Generic (PLEG): container finished" podID="fbf79dca-3857-4554-9b1a-8b98d98c88ad" containerID="87fde037a411221e3bffa919d3324433a80d5abf42a87c0fad7e8b375f5c1c9e" exitCode=0 Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.917344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" event={"ID":"fbf79dca-3857-4554-9b1a-8b98d98c88ad","Type":"ContainerDied","Data":"87fde037a411221e3bffa919d3324433a80d5abf42a87c0fad7e8b375f5c1c9e"} Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.921136 4772 generic.go:334] "Generic (PLEG): container finished" podID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" containerID="67627cdc5086f26b1a8ffaefd1f8a06728f4f46dbf0ecbee8b6df77b810c77ff" exitCode=0 Jan 24 03:44:28 crc kubenswrapper[4772]: I0124 03:44:28.921185 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" event={"ID":"6b82cfc0-bf71-4a95-8797-f92690f9a2b0","Type":"ContainerDied","Data":"67627cdc5086f26b1a8ffaefd1f8a06728f4f46dbf0ecbee8b6df77b810c77ff"} Jan 24 03:44:32 crc kubenswrapper[4772]: I0124 03:44:32.506658 4772 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:44:34 crc kubenswrapper[4772]: I0124 03:44:34.960640 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:35 crc kubenswrapper[4772]: I0124 03:44:35.314410 4772 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z98q4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 24 03:44:35 crc kubenswrapper[4772]: I0124 03:44:35.314464 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" podUID="fbf79dca-3857-4554-9b1a-8b98d98c88ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 24 03:44:35 crc kubenswrapper[4772]: I0124 03:44:35.319692 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sflxf" Jan 24 03:44:35 crc kubenswrapper[4772]: I0124 03:44:35.674426 4772 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mshdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 24 03:44:35 crc kubenswrapper[4772]: I0124 03:44:35.675358 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" podUID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.482075 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.491331 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.517593 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"] Jan 24 03:44:43 crc kubenswrapper[4772]: E0124 03:44:43.517868 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a842be3a-30cd-4ee9-bfe6-e6caf8ee918b" containerName="pruner" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.517885 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a842be3a-30cd-4ee9-bfe6-e6caf8ee918b" containerName="pruner" Jan 24 03:44:43 crc kubenswrapper[4772]: E0124 03:44:43.517909 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf79dca-3857-4554-9b1a-8b98d98c88ad" containerName="route-controller-manager" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.517916 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf79dca-3857-4554-9b1a-8b98d98c88ad" containerName="route-controller-manager" Jan 24 03:44:43 crc kubenswrapper[4772]: E0124 03:44:43.517925 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" containerName="controller-manager" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.517933 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" containerName="controller-manager" Jan 24 03:44:43 crc kubenswrapper[4772]: E0124 03:44:43.517942 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209e808b-a94d-475a-8e3d-a1fe46ec818d" containerName="pruner" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.517948 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="209e808b-a94d-475a-8e3d-a1fe46ec818d" containerName="pruner" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.518071 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" containerName="controller-manager" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.518084 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="209e808b-a94d-475a-8e3d-a1fe46ec818d" containerName="pruner" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.518092 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf79dca-3857-4554-9b1a-8b98d98c88ad" containerName="route-controller-manager" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.518102 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a842be3a-30cd-4ee9-bfe6-e6caf8ee918b" containerName="pruner" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.518444 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.523790 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"] Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657322 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-config\") pod \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657373 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9998l\" (UniqueName: \"kubernetes.io/projected/fbf79dca-3857-4554-9b1a-8b98d98c88ad-kube-api-access-9998l\") pod \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657453 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf79dca-3857-4554-9b1a-8b98d98c88ad-serving-cert\") pod \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657510 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-client-ca\") pod \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657560 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-serving-cert\") pod \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657603 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-client-ca\") pod \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657628 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-config\") pod \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\" (UID: \"fbf79dca-3857-4554-9b1a-8b98d98c88ad\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657654 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-proxy-ca-bundles\") pod \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657680 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwm2c\" (UniqueName: \"kubernetes.io/projected/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-kube-api-access-zwm2c\") pod \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\" (UID: \"6b82cfc0-bf71-4a95-8797-f92690f9a2b0\") " Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657897 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be43078-9419-455d-ba8f-cc706c6fa8eb-serving-cert\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657952 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmxl\" (UniqueName: \"kubernetes.io/projected/8be43078-9419-455d-ba8f-cc706c6fa8eb-kube-api-access-svmxl\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.657993 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-client-ca\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.658015 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-config" (OuterVolumeSpecName: "config") pod "6b82cfc0-bf71-4a95-8797-f92690f9a2b0" (UID: "6b82cfc0-bf71-4a95-8797-f92690f9a2b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.658032 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-config\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.658099 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-proxy-ca-bundles\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.658150 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.659259 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6b82cfc0-bf71-4a95-8797-f92690f9a2b0" (UID: "6b82cfc0-bf71-4a95-8797-f92690f9a2b0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.659669 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "6b82cfc0-bf71-4a95-8797-f92690f9a2b0" (UID: "6b82cfc0-bf71-4a95-8797-f92690f9a2b0"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.659946 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "fbf79dca-3857-4554-9b1a-8b98d98c88ad" (UID: "fbf79dca-3857-4554-9b1a-8b98d98c88ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.660148 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-config" (OuterVolumeSpecName: "config") pod "fbf79dca-3857-4554-9b1a-8b98d98c88ad" (UID: "fbf79dca-3857-4554-9b1a-8b98d98c88ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.662596 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf79dca-3857-4554-9b1a-8b98d98c88ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fbf79dca-3857-4554-9b1a-8b98d98c88ad" (UID: "fbf79dca-3857-4554-9b1a-8b98d98c88ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.666488 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-kube-api-access-zwm2c" (OuterVolumeSpecName: "kube-api-access-zwm2c") pod "6b82cfc0-bf71-4a95-8797-f92690f9a2b0" (UID: "6b82cfc0-bf71-4a95-8797-f92690f9a2b0"). InnerVolumeSpecName "kube-api-access-zwm2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.669148 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf79dca-3857-4554-9b1a-8b98d98c88ad-kube-api-access-9998l" (OuterVolumeSpecName: "kube-api-access-9998l") pod "fbf79dca-3857-4554-9b1a-8b98d98c88ad" (UID: "fbf79dca-3857-4554-9b1a-8b98d98c88ad"). InnerVolumeSpecName "kube-api-access-9998l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.673003 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6b82cfc0-bf71-4a95-8797-f92690f9a2b0" (UID: "6b82cfc0-bf71-4a95-8797-f92690f9a2b0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.759039 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be43078-9419-455d-ba8f-cc706c6fa8eb-serving-cert\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.759439 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svmxl\" (UniqueName: \"kubernetes.io/projected/8be43078-9419-455d-ba8f-cc706c6fa8eb-kube-api-access-svmxl\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.759871 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-client-ca\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.759922 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-config\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.759947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-proxy-ca-bundles\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.759985 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.759997 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbf79dca-3857-4554-9b1a-8b98d98c88ad-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.760006 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.760018 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwm2c\" (UniqueName: \"kubernetes.io/projected/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-kube-api-access-zwm2c\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.760026 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9998l\" (UniqueName: \"kubernetes.io/projected/fbf79dca-3857-4554-9b1a-8b98d98c88ad-kube-api-access-9998l\") on node \"crc\" DevicePath \"\"" Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 
03:44:43.760035 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbf79dca-3857-4554-9b1a-8b98d98c88ad-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.760044 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-client-ca\") on node \"crc\" DevicePath \"\""
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.760052 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b82cfc0-bf71-4a95-8797-f92690f9a2b0-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.760616 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-client-ca\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.761298 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-config\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.761664 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-proxy-ca-bundles\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.763802 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be43078-9419-455d-ba8f-cc706c6fa8eb-serving-cert\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.778797 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svmxl\" (UniqueName: \"kubernetes.io/projected/8be43078-9419-455d-ba8f-cc706c6fa8eb-kube-api-access-svmxl\") pod \"controller-manager-7f7f7fd78f-wr25p\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"
Jan 24 03:44:43 crc kubenswrapper[4772]: I0124 03:44:43.846759 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.027652 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh" event={"ID":"6b82cfc0-bf71-4a95-8797-f92690f9a2b0","Type":"ContainerDied","Data":"19d5b2453331cfbc16ed008a76543c71efb9568fe88076c1b1f25fe0d39bf751"}
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.027702 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mshdh"
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.027752 4772 scope.go:117] "RemoveContainer" containerID="67627cdc5086f26b1a8ffaefd1f8a06728f4f46dbf0ecbee8b6df77b810c77ff"
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.032112 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4" event={"ID":"fbf79dca-3857-4554-9b1a-8b98d98c88ad","Type":"ContainerDied","Data":"d73994ac0c90f015198de54cf143b8582c6b24bfbe1aea156d7034974f9bf11b"}
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.032167 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.052278 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mshdh"]
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.060533 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mshdh"]
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.064360 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"]
Jan 24 03:44:44 crc kubenswrapper[4772]: I0124 03:44:44.067232 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z98q4"]
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.061696 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hd4qv"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.627805 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"]
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.628763 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.630977 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.631904 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.633235 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.633321 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.633481 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.633711 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.637989 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"]
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.668127 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b82cfc0-bf71-4a95-8797-f92690f9a2b0" path="/var/lib/kubelet/pods/6b82cfc0-bf71-4a95-8797-f92690f9a2b0/volumes"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.668808 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf79dca-3857-4554-9b1a-8b98d98c88ad" path="/var/lib/kubelet/pods/fbf79dca-3857-4554-9b1a-8b98d98c88ad/volumes"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.782250 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-client-ca\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.782418 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-serving-cert\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.782547 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-config\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.782690 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq545\" (UniqueName: \"kubernetes.io/projected/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-kube-api-access-xq545\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.883489 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-client-ca\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.883606 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-serving-cert\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.883637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-config\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.883673 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq545\" (UniqueName: \"kubernetes.io/projected/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-kube-api-access-xq545\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.884571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-client-ca\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.885233 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-config\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.891944 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-serving-cert\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.900278 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq545\" (UniqueName: \"kubernetes.io/projected/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-kube-api-access-xq545\") pod \"route-controller-manager-5687cfbc7f-sfww6\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:45 crc kubenswrapper[4772]: I0124 03:44:45.949264 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:44:46 crc kubenswrapper[4772]: I0124 03:44:46.900249 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 03:44:46 crc kubenswrapper[4772]: I0124 03:44:46.900318 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 03:44:48 crc kubenswrapper[4772]: I0124 03:44:48.062799 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"]
Jan 24 03:44:48 crc kubenswrapper[4772]: I0124 03:44:48.158806 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"]
Jan 24 03:44:50 crc kubenswrapper[4772]: E0124 03:44:50.026974 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 24 03:44:50 crc kubenswrapper[4772]: E0124 03:44:50.027788 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqdvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fqbcg_openshift-marketplace(0b681c8c-16cd-49d7-b3ac-facfe4238b0d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 24 03:44:50 crc kubenswrapper[4772]: E0124 03:44:50.029312 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fqbcg" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d"
Jan 24 03:44:51 crc kubenswrapper[4772]: E0124 03:44:51.164085 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fqbcg" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d"
Jan 24 03:44:51 crc kubenswrapper[4772]: E0124 03:44:51.223253 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 24 03:44:51 crc kubenswrapper[4772]: E0124 03:44:51.223408 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k88x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-j67b7_openshift-marketplace(83971acc-e6fb-4ff2-b45b-7f0dda461036): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 24 03:44:51 crc kubenswrapper[4772]: E0124 03:44:51.224716 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-j67b7" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036"
canceled\"" pod="openshift-marketplace/redhat-marketplace-j67b7" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" Jan 24 03:44:51 crc kubenswrapper[4772]: I0124 03:44:51.633615 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 03:44:52 crc kubenswrapper[4772]: I0124 03:44:52.853126 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 03:44:52 crc kubenswrapper[4772]: I0124 03:44:52.857533 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:52 crc kubenswrapper[4772]: I0124 03:44:52.858125 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 03:44:52 crc kubenswrapper[4772]: I0124 03:44:52.860936 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 03:44:52 crc kubenswrapper[4772]: I0124 03:44:52.861863 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 03:44:52 crc kubenswrapper[4772]: I0124 03:44:52.980605 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5ec8b4b-b1be-435e-966c-1d76de197426-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:52 crc kubenswrapper[4772]: I0124 03:44:52.980686 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec8b4b-b1be-435e-966c-1d76de197426-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:53 crc kubenswrapper[4772]: I0124 03:44:53.081827 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5ec8b4b-b1be-435e-966c-1d76de197426-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:53 crc kubenswrapper[4772]: I0124 03:44:53.081892 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec8b4b-b1be-435e-966c-1d76de197426-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:53 crc kubenswrapper[4772]: I0124 03:44:53.081969 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec8b4b-b1be-435e-966c-1d76de197426-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:53 crc kubenswrapper[4772]: I0124 03:44:53.099362 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5ec8b4b-b1be-435e-966c-1d76de197426-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:53 crc kubenswrapper[4772]: I0124 03:44:53.178882 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:44:56 crc kubenswrapper[4772]: E0124 03:44:56.351249 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-j67b7" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" Jan 24 03:44:56 crc kubenswrapper[4772]: E0124 03:44:56.463841 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 24 03:44:56 crc kubenswrapper[4772]: E0124 03:44:56.463987 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54z9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bkqlj_openshift-marketplace(022f55bb-f179-48cc-ae69-a9936070e3b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 03:44:56 crc kubenswrapper[4772]: E0124 03:44:56.465812 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bkqlj" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" Jan 24 03:44:56 crc kubenswrapper[4772]: E0124 03:44:56.488661 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 24 
03:44:56 crc kubenswrapper[4772]: E0124 03:44:56.488833 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjs8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jgctw_openshift-marketplace(f65f1015-89a0-482e-87d3-f2b2e2149e2d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 03:44:56 crc kubenswrapper[4772]: E0124 03:44:56.489982 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jgctw" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.040355 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bkqlj" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.040352 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jgctw" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.069939 4772 scope.go:117] "RemoveContainer" containerID="87fde037a411221e3bffa919d3324433a80d5abf42a87c0fad7e8b375f5c1c9e" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.111180 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.111340 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hwwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j67dm_openshift-marketplace(6cd7a1a3-2773-4ffc-9cef-8015556b3b33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.114539 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j67dm" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.145336 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.145500 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t8nr2_openshift-marketplace(d124ff24-991c-4a60-997d-b899e8387e0d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.146708 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t8nr2" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.157243 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.157404 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zq778,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lcjhk_openshift-marketplace(1276fcbc-1783-4f3a-8ee0-8be45b19d4b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.158065 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.158243 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkf7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.159149 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lcjhk" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1"
Jan 24 03:44:58 crc kubenswrapper[4772]: E0124 03:44:58.159486 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-prjh9" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.247120 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.249301 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.251647 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.351521 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b8998-d655-4164-b6ae-ba0fc8bd4434-kube-api-access\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.351572 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-kubelet-dir\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.351598 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-var-lock\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.452375 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-var-lock\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.452495 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-var-lock\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.452510 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b8998-d655-4164-b6ae-ba0fc8bd4434-kube-api-access\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.452635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-kubelet-dir\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.452690 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-kubelet-dir\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.463505 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"]
Jan 24 03:44:58 crc kubenswrapper[4772]: W0124 03:44:58.479265 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5cc9ead_7e85_4d0b_b686_a5fc4eec7b24.slice/crio-03136664f23fc1ca5f0862f3f69ffb6e768326eae4720f05a7f9d6c3d621920f WatchSource:0}: Error finding container 03136664f23fc1ca5f0862f3f69ffb6e768326eae4720f05a7f9d6c3d621920f: Status 404 returned error can't find the container with id 03136664f23fc1ca5f0862f3f69ffb6e768326eae4720f05a7f9d6c3d621920f
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.486528 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b8998-d655-4164-b6ae-ba0fc8bd4434-kube-api-access\") pod \"installer-9-crc\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.590121 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.590921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mpdb8"]
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.594221 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.597629 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"]
Jan 24 03:44:58 crc kubenswrapper[4772]: W0124 03:44:58.613522 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8311b11_97fe_4657_add7_66fd66adc69f.slice/crio-962738b880183027260c248b9167961425585c4ae70153c2975c322a1fb70ad3 WatchSource:0}: Error finding container 962738b880183027260c248b9167961425585c4ae70153c2975c322a1fb70ad3: Status 404 returned error can't find the container with id 962738b880183027260c248b9167961425585c4ae70153c2975c322a1fb70ad3
Jan 24 03:44:58 crc kubenswrapper[4772]: I0124 03:44:58.779447 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 24 03:44:59 crc kubenswrapper[4772]: W0124 03:44:59.000098 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod395b8998_d655_4164_b6ae_ba0fc8bd4434.slice/crio-2dd9c12e3196d14d484d0fe3641ca8b749cbc964456de512b6271944a2f005d6 WatchSource:0}: Error finding container 2dd9c12e3196d14d484d0fe3641ca8b749cbc964456de512b6271944a2f005d6: Status 404 returned error can't find the container with id 2dd9c12e3196d14d484d0fe3641ca8b749cbc964456de512b6271944a2f005d6
Jan 24 03:44:59 crc kubenswrapper[4772]: I0124 03:44:59.133472 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" event={"ID":"8be43078-9419-455d-ba8f-cc706c6fa8eb","Type":"ContainerStarted","Data":"4f6bec1c54fb4663e8eac7ea768c5c79e3c54447d4787f25b864c6889fe2d092"}
Jan 24 03:44:59 crc kubenswrapper[4772]: I0124 03:44:59.137021 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"395b8998-d655-4164-b6ae-ba0fc8bd4434","Type":"ContainerStarted","Data":"2dd9c12e3196d14d484d0fe3641ca8b749cbc964456de512b6271944a2f005d6"}
Jan 24 03:44:59 crc kubenswrapper[4772]: I0124 03:44:59.138233 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" event={"ID":"e8311b11-97fe-4657-add7-66fd66adc69f","Type":"ContainerStarted","Data":"962738b880183027260c248b9167961425585c4ae70153c2975c322a1fb70ad3"}
Jan 24 03:44:59 crc kubenswrapper[4772]: I0124 03:44:59.139335 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" event={"ID":"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24","Type":"ContainerStarted","Data":"03136664f23fc1ca5f0862f3f69ffb6e768326eae4720f05a7f9d6c3d621920f"}
Jan 24 03:44:59 crc kubenswrapper[4772]: I0124 03:44:59.140965 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e5ec8b4b-b1be-435e-966c-1d76de197426","Type":"ContainerStarted","Data":"d4ac24bca8d3e103146a430e6ce6f87b4cee4641bbb4f70a54b20c6d6339c923"}
Jan 24 03:44:59 crc kubenswrapper[4772]: E0124 03:44:59.141978 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lcjhk" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1"
Jan 24 03:44:59 crc kubenswrapper[4772]: E0124 03:44:59.142051 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j67dm" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33"
Jan 24 03:44:59 crc kubenswrapper[4772]: E0124 03:44:59.142438 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-prjh9" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5"
Jan 24 03:44:59 crc kubenswrapper[4772]: E0124 03:44:59.145385 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t8nr2" podUID="d124ff24-991c-4a60-997d-b899e8387e0d"
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.136647 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm"]
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.138092 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm"
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.139488 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.140087 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.151105 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" event={"ID":"e8311b11-97fe-4657-add7-66fd66adc69f","Type":"ContainerStarted","Data":"3c2cf04e5380aaf2951b69d4998a5bab53116e456c4049071f1612967b41fb6d"}
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.151163 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mpdb8" event={"ID":"e8311b11-97fe-4657-add7-66fd66adc69f","Type":"ContainerStarted","Data":"5d7e838c3cf3beca46fcdf067387fd4fd2dc8b2fa17276378ae2d733d7139282"}
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.153415 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" event={"ID":"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24","Type":"ContainerStarted","Data":"58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98"}
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.153476 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" podUID="e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" containerName="route-controller-manager" containerID="cri-o://58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98" gracePeriod=30
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.153604 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.154938 4772 generic.go:334] "Generic (PLEG): container finished" podID="e5ec8b4b-b1be-435e-966c-1d76de197426" containerID="708fc5947c9af4eaccc682d9be24e66981f18929afa1697d35113008c7670c7a" exitCode=0
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.155060 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e5ec8b4b-b1be-435e-966c-1d76de197426","Type":"ContainerDied","Data":"708fc5947c9af4eaccc682d9be24e66981f18929afa1697d35113008c7670c7a"}
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.157928 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" event={"ID":"8be43078-9419-455d-ba8f-cc706c6fa8eb","Type":"ContainerStarted","Data":"f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95"}
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.158020 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" podUID="8be43078-9419-455d-ba8f-cc706c6fa8eb" containerName="controller-manager" containerID="cri-o://f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95" gracePeriod=30
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.158162 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"
status="" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.159089 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"395b8998-d655-4164-b6ae-ba0fc8bd4434","Type":"ContainerStarted","Data":"84325b4a25e5cbb18ad7d62436052525ae290e8a64b8ef8f75a5cd7daa26a374"} Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.159153 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm"] Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.162852 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.198448 4772 patch_prober.go:28] interesting pod/controller-manager-7f7f7fd78f-wr25p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:54898->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.198520 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" podUID="8be43078-9419-455d-ba8f-cc706c6fa8eb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:54898->10.217.0.54:8443: read: connection reset by peer" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.206075 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" podStartSLOduration=32.206057196 podStartE2EDuration="32.206057196s" podCreationTimestamp="2026-01-24 03:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:00.201942231 +0000 UTC m=+197.239032966" watchObservedRunningTime="2026-01-24 03:45:00.206057196 +0000 UTC m=+197.243147921" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.224655 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" podStartSLOduration=32.224638088 podStartE2EDuration="32.224638088s" podCreationTimestamp="2026-01-24 03:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:00.222056736 +0000 UTC m=+197.259147481" watchObservedRunningTime="2026-01-24 03:45:00.224638088 +0000 UTC m=+197.261728813" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.271094 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.271074812 podStartE2EDuration="2.271074812s" podCreationTimestamp="2026-01-24 03:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:00.269690664 +0000 UTC m=+197.306781399" watchObservedRunningTime="2026-01-24 03:45:00.271074812 +0000 UTC m=+197.308165547" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.285430 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dr7ts\" (UniqueName: \"kubernetes.io/projected/b686536e-83b5-45a3-8266-2a6f34b84c1c-kube-api-access-dr7ts\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.285502 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b686536e-83b5-45a3-8266-2a6f34b84c1c-secret-volume\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.285576 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b686536e-83b5-45a3-8266-2a6f34b84c1c-config-volume\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.293186 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mpdb8" podStartSLOduration=176.293169113 podStartE2EDuration="2m56.293169113s" podCreationTimestamp="2026-01-24 03:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:00.291530217 +0000 UTC m=+197.328620942" watchObservedRunningTime="2026-01-24 03:45:00.293169113 +0000 UTC m=+197.330259838" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.386350 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr7ts\" (UniqueName: \"kubernetes.io/projected/b686536e-83b5-45a3-8266-2a6f34b84c1c-kube-api-access-dr7ts\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.386413 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b686536e-83b5-45a3-8266-2a6f34b84c1c-secret-volume\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.386473 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b686536e-83b5-45a3-8266-2a6f34b84c1c-config-volume\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.387384 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b686536e-83b5-45a3-8266-2a6f34b84c1c-config-volume\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.407829 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dr7ts\" (UniqueName: \"kubernetes.io/projected/b686536e-83b5-45a3-8266-2a6f34b84c1c-kube-api-access-dr7ts\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.413160 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b686536e-83b5-45a3-8266-2a6f34b84c1c-secret-volume\") pod \"collect-profiles-29487105-jltsm\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.455279 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.551778 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.571376 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.600103 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh"] Jan 24 03:45:00 crc kubenswrapper[4772]: E0124 03:45:00.601065 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" containerName="route-controller-manager" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.601295 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" containerName="route-controller-manager" Jan 24 03:45:00 crc kubenswrapper[4772]: E0124 03:45:00.601326 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be43078-9419-455d-ba8f-cc706c6fa8eb" containerName="controller-manager" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.601337 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be43078-9419-455d-ba8f-cc706c6fa8eb" containerName="controller-manager" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.601476 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be43078-9419-455d-ba8f-cc706c6fa8eb" containerName="controller-manager" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.601494 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" containerName="route-controller-manager" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.601961 4772 util.go:30] "No sandbox for pod can be found. 
Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.601961 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.612068 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh"] Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.692293 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-client-ca\") pod \"8be43078-9419-455d-ba8f-cc706c6fa8eb\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.692368 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-config\") pod \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.693784 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-serving-cert\") pod \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.693825 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be43078-9419-455d-ba8f-cc706c6fa8eb-serving-cert\") pod \"8be43078-9419-455d-ba8f-cc706c6fa8eb\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.693875 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-client-ca\") pod \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.693962 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-config\") pod \"8be43078-9419-455d-ba8f-cc706c6fa8eb\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.694003 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-proxy-ca-bundles\") pod \"8be43078-9419-455d-ba8f-cc706c6fa8eb\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.694030 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq545\" (UniqueName: \"kubernetes.io/projected/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-kube-api-access-xq545\") pod \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\" (UID: \"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.694061 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svmxl\" (UniqueName: \"kubernetes.io/projected/8be43078-9419-455d-ba8f-cc706c6fa8eb-kube-api-access-svmxl\") pod \"8be43078-9419-455d-ba8f-cc706c6fa8eb\" (UID: \"8be43078-9419-455d-ba8f-cc706c6fa8eb\") " Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.694253 4772
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-client-ca\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.694294 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-config\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.694344 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hq2\" (UniqueName: \"kubernetes.io/projected/d4c18fca-913e-416b-bd42-9b0bc69569a0-kube-api-access-v7hq2\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.694366 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c18fca-913e-416b-bd42-9b0bc69569a0-serving-cert\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.693435 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "8be43078-9419-455d-ba8f-cc706c6fa8eb" (UID: "8be43078-9419-455d-ba8f-cc706c6fa8eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.693645 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-config" (OuterVolumeSpecName: "config") pod "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" (UID: "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.695878 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-config" (OuterVolumeSpecName: "config") pod "8be43078-9419-455d-ba8f-cc706c6fa8eb" (UID: "8be43078-9419-455d-ba8f-cc706c6fa8eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.695950 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8be43078-9419-455d-ba8f-cc706c6fa8eb" (UID: "8be43078-9419-455d-ba8f-cc706c6fa8eb"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.696274 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" (UID: "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.700884 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be43078-9419-455d-ba8f-cc706c6fa8eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8be43078-9419-455d-ba8f-cc706c6fa8eb" (UID: "8be43078-9419-455d-ba8f-cc706c6fa8eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.700930 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-kube-api-access-xq545" (OuterVolumeSpecName: "kube-api-access-xq545") pod "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" (UID: "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24"). InnerVolumeSpecName "kube-api-access-xq545". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.700903 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be43078-9419-455d-ba8f-cc706c6fa8eb-kube-api-access-svmxl" (OuterVolumeSpecName: "kube-api-access-svmxl") pod "8be43078-9419-455d-ba8f-cc706c6fa8eb" (UID: "8be43078-9419-455d-ba8f-cc706c6fa8eb"). InnerVolumeSpecName "kube-api-access-svmxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.701471 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" (UID: "e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.795948 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-client-ca\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.796082 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-config\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.796201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hq2\" (UniqueName: \"kubernetes.io/projected/d4c18fca-913e-416b-bd42-9b0bc69569a0-kube-api-access-v7hq2\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.796243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c18fca-913e-416b-bd42-9b0bc69569a0-serving-cert\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798068 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798129 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798167 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq545\" (UniqueName: \"kubernetes.io/projected/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-kube-api-access-xq545\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798198 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svmxl\" (UniqueName: \"kubernetes.io/projected/8be43078-9419-455d-ba8f-cc706c6fa8eb-kube-api-access-svmxl\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798233 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be43078-9419-455d-ba8f-cc706c6fa8eb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798266 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798293 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798323 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be43078-9419-455d-ba8f-cc706c6fa8eb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798350 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798622 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-client-ca\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.798731 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-config\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.800906 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c18fca-913e-416b-bd42-9b0bc69569a0-serving-cert\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.813778 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hq2\" (UniqueName: \"kubernetes.io/projected/d4c18fca-913e-416b-bd42-9b0bc69569a0-kube-api-access-v7hq2\") pod \"route-controller-manager-7cf97b9c8b-mvmbh\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.895363 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm"] Jan 24 03:45:00 crc kubenswrapper[4772]: W0124 03:45:00.904319 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb686536e_83b5_45a3_8266_2a6f34b84c1c.slice/crio-ba739858378eaf39453ee837aa1abd99c237a514d529f206f71d0f5fa7ff0a9e WatchSource:0}: Error finding container ba739858378eaf39453ee837aa1abd99c237a514d529f206f71d0f5fa7ff0a9e: Status 404 returned error can't find the container with id ba739858378eaf39453ee837aa1abd99c237a514d529f206f71d0f5fa7ff0a9e Jan 24 03:45:00 crc kubenswrapper[4772]: I0124 03:45:00.931530 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.150539 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh"] Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.167998 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" event={"ID":"d4c18fca-913e-416b-bd42-9b0bc69569a0","Type":"ContainerStarted","Data":"1f938fd10fae2f87b4765accf670b53bd171dfe4be7cd93d0bf14bdd6406f534"} Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.172952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" event={"ID":"b686536e-83b5-45a3-8266-2a6f34b84c1c","Type":"ContainerStarted","Data":"fb9448ce4d84cb5cd68eb1e9866f7686021c687f8e73a48ad5c19d2826a5c87b"} Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.173019 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" event={"ID":"b686536e-83b5-45a3-8266-2a6f34b84c1c","Type":"ContainerStarted","Data":"ba739858378eaf39453ee837aa1abd99c237a514d529f206f71d0f5fa7ff0a9e"} Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.175335 4772 generic.go:334] "Generic (PLEG): container finished" podID="8be43078-9419-455d-ba8f-cc706c6fa8eb" containerID="f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95" exitCode=0 Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.175430 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" event={"ID":"8be43078-9419-455d-ba8f-cc706c6fa8eb","Type":"ContainerDied","Data":"f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95"} Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.175517 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" event={"ID":"8be43078-9419-455d-ba8f-cc706c6fa8eb","Type":"ContainerDied","Data":"4f6bec1c54fb4663e8eac7ea768c5c79e3c54447d4787f25b864c6889fe2d092"} Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.175545 4772 scope.go:117] "RemoveContainer" containerID="f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.175660 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.177780 4772 generic.go:334] "Generic (PLEG): container finished" podID="e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" containerID="58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98" exitCode=0 Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.177913 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" event={"ID":"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24","Type":"ContainerDied","Data":"58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98"} Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.177933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" event={"ID":"e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24","Type":"ContainerDied","Data":"03136664f23fc1ca5f0862f3f69ffb6e768326eae4720f05a7f9d6c3d621920f"} Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.177935 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.190294 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" podStartSLOduration=1.190273791 podStartE2EDuration="1.190273791s" podCreationTimestamp="2026-01-24 03:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:01.187778391 +0000 UTC m=+198.224869136" watchObservedRunningTime="2026-01-24 03:45:01.190273791 +0000 UTC m=+198.227364506" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.200001 4772 scope.go:117] "RemoveContainer" containerID="f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95" Jan 24 03:45:01 crc kubenswrapper[4772]: E0124 03:45:01.201126 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95\": container with ID starting with f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95 not found: ID does not exist" containerID="f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.201237 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95"} err="failed to get container status \"f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95\": rpc error: code = NotFound desc = could not find container \"f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95\": container with ID starting with f2058bbeb493a6fb97499f92e7f68440d9eecede6c182fbeb7d751081f946f95 not found: ID does not exist" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.201465 4772 scope.go:117] "RemoveContainer" containerID="58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.215184 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"] Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 
03:45:01.218027 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-sfww6"] Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.232919 4772 scope.go:117] "RemoveContainer" containerID="58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98" Jan 24 03:45:01 crc kubenswrapper[4772]: E0124 03:45:01.233456 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98\": container with ID starting with 58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98 not found: ID does not exist" containerID="58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.233531 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98"} err="failed to get container status \"58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98\": rpc error: code = NotFound desc = could not find container \"58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98\": container with ID starting with 58c6c1ba7a43333844aa60a4de06794a94a3b45e437159e3c4f107e0b6b64d98 not found: ID does not exist" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.300602 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"] Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.305369 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f7f7fd78f-wr25p"] Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.426345 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.510698 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec8b4b-b1be-435e-966c-1d76de197426-kubelet-dir\") pod \"e5ec8b4b-b1be-435e-966c-1d76de197426\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") " Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.510796 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5ec8b4b-b1be-435e-966c-1d76de197426-kube-api-access\") pod \"e5ec8b4b-b1be-435e-966c-1d76de197426\" (UID: \"e5ec8b4b-b1be-435e-966c-1d76de197426\") "
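Editor's note: the paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" messages above look alarming but are benign. The kubelet asked the runtime about a container that was already removed, and a gRPC NotFound at that point means the desired state ("no such container") already holds, so the kubelet logs the error and moves on. A minimal stand-in sketch of that idempotent-cleanup idiom follows; it is not kubelet code, removeContainer is a hypothetical stand-in for a CRI call, and it requires the google.golang.org/grpc module.

```go
// notfound.go: a stand-in sketch (not kubelet code) of the idempotent-cleanup
// idiom visible above: when deleting a container that is already gone, treat
// a gRPC NotFound from the runtime as success.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call; here it always
// reports NotFound, as CRI-O did for the container IDs in the log above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func cleanup(id string) error {
	err := removeContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil // already gone: the desired state holds, nothing left to do
	}
	return err
}

func main() {
	if err := cleanup("58c6c1ba7a43"); err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("NotFound swallowed; cleanup is idempotent")
}
```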
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.511095 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec8b4b-b1be-435e-966c-1d76de197426-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.516248 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ec8b4b-b1be-435e-966c-1d76de197426-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e5ec8b4b-b1be-435e-966c-1d76de197426" (UID: "e5ec8b4b-b1be-435e-966c-1d76de197426"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.612857 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5ec8b4b-b1be-435e-966c-1d76de197426-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.668634 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be43078-9419-455d-ba8f-cc706c6fa8eb" path="/var/lib/kubelet/pods/8be43078-9419-455d-ba8f-cc706c6fa8eb/volumes" Jan 24 03:45:01 crc kubenswrapper[4772]: I0124 03:45:01.669587 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24" path="/var/lib/kubelet/pods/e5cc9ead-7e85-4d0b-b686-a5fc4eec7b24/volumes" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.186060 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" event={"ID":"d4c18fca-913e-416b-bd42-9b0bc69569a0","Type":"ContainerStarted","Data":"3867b166dff55f03b4ac50c91e57108e1831a2710ade07b2052535bca4699dd5"} Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.186522 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.188283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e5ec8b4b-b1be-435e-966c-1d76de197426","Type":"ContainerDied","Data":"d4ac24bca8d3e103146a430e6ce6f87b4cee4641bbb4f70a54b20c6d6339c923"} Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.188308 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ac24bca8d3e103146a430e6ce6f87b4cee4641bbb4f70a54b20c6d6339c923" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.188365 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.192122 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.194349 4772 generic.go:334] "Generic (PLEG): container finished" podID="b686536e-83b5-45a3-8266-2a6f34b84c1c" containerID="fb9448ce4d84cb5cd68eb1e9866f7686021c687f8e73a48ad5c19d2826a5c87b" exitCode=0 Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.194432 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" event={"ID":"b686536e-83b5-45a3-8266-2a6f34b84c1c","Type":"ContainerDied","Data":"fb9448ce4d84cb5cd68eb1e9866f7686021c687f8e73a48ad5c19d2826a5c87b"} Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.202393 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" podStartSLOduration=14.202373099 podStartE2EDuration="14.202373099s" podCreationTimestamp="2026-01-24 03:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:02.199130187 +0000 UTC m=+199.236220912" watchObservedRunningTime="2026-01-24 03:45:02.202373099 +0000 UTC m=+199.239463824" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.636057 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn"] Jan 24 03:45:02 crc kubenswrapper[4772]: E0124 03:45:02.636290 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ec8b4b-b1be-435e-966c-1d76de197426" containerName="pruner" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.636302 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ec8b4b-b1be-435e-966c-1d76de197426" containerName="pruner" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.636406 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ec8b4b-b1be-435e-966c-1d76de197426" containerName="pruner" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.636762 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.638538 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.638880 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.639494 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.639691 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.639887 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.639991 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.647145 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.653478 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn"] Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.730511 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54b4n\" (UniqueName: \"kubernetes.io/projected/9000a239-6b89-4eee-9527-0701538cf737-kube-api-access-54b4n\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.730589 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-config\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.730617 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-proxy-ca-bundles\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.730644 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-client-ca\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.730747 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9000a239-6b89-4eee-9527-0701538cf737-serving-cert\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.832900 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54b4n\" (UniqueName: \"kubernetes.io/projected/9000a239-6b89-4eee-9527-0701538cf737-kube-api-access-54b4n\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.833011 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-config\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.833045 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-proxy-ca-bundles\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.833081 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-client-ca\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.833104 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9000a239-6b89-4eee-9527-0701538cf737-serving-cert\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.834107 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-client-ca\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.834329 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-proxy-ca-bundles\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.836139 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-config\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " 
pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.842584 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9000a239-6b89-4eee-9527-0701538cf737-serving-cert\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.852260 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54b4n\" (UniqueName: \"kubernetes.io/projected/9000a239-6b89-4eee-9527-0701538cf737-kube-api-access-54b4n\") pod \"controller-manager-5f596b4b9d-tdqvn\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:02 crc kubenswrapper[4772]: I0124 03:45:02.967792 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.156949 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn"] Jan 24 03:45:03 crc kubenswrapper[4772]: W0124 03:45:03.169181 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9000a239_6b89_4eee_9527_0701538cf737.slice/crio-85d2a78216eb05f316f3adf144ca574504408d8c0baf13d5f81216862d1db79e WatchSource:0}: Error finding container 85d2a78216eb05f316f3adf144ca574504408d8c0baf13d5f81216862d1db79e: Status 404 returned error can't find the container with id 85d2a78216eb05f316f3adf144ca574504408d8c0baf13d5f81216862d1db79e Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.205366 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" event={"ID":"9000a239-6b89-4eee-9527-0701538cf737","Type":"ContainerStarted","Data":"85d2a78216eb05f316f3adf144ca574504408d8c0baf13d5f81216862d1db79e"} Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.207581 4772 generic.go:334] "Generic (PLEG): container finished" podID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerID="d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9" exitCode=0 Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.207671 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbcg" event={"ID":"0b681c8c-16cd-49d7-b3ac-facfe4238b0d","Type":"ContainerDied","Data":"d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9"} Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.415982 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.543041 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b686536e-83b5-45a3-8266-2a6f34b84c1c-config-volume\") pod \"b686536e-83b5-45a3-8266-2a6f34b84c1c\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.543186 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr7ts\" (UniqueName: \"kubernetes.io/projected/b686536e-83b5-45a3-8266-2a6f34b84c1c-kube-api-access-dr7ts\") pod \"b686536e-83b5-45a3-8266-2a6f34b84c1c\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.543251 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b686536e-83b5-45a3-8266-2a6f34b84c1c-secret-volume\") pod \"b686536e-83b5-45a3-8266-2a6f34b84c1c\" (UID: \"b686536e-83b5-45a3-8266-2a6f34b84c1c\") " Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.543831 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b686536e-83b5-45a3-8266-2a6f34b84c1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b686536e-83b5-45a3-8266-2a6f34b84c1c" (UID: "b686536e-83b5-45a3-8266-2a6f34b84c1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.561996 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b686536e-83b5-45a3-8266-2a6f34b84c1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b686536e-83b5-45a3-8266-2a6f34b84c1c" (UID: "b686536e-83b5-45a3-8266-2a6f34b84c1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.562077 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b686536e-83b5-45a3-8266-2a6f34b84c1c-kube-api-access-dr7ts" (OuterVolumeSpecName: "kube-api-access-dr7ts") pod "b686536e-83b5-45a3-8266-2a6f34b84c1c" (UID: "b686536e-83b5-45a3-8266-2a6f34b84c1c"). InnerVolumeSpecName "kube-api-access-dr7ts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.644632 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr7ts\" (UniqueName: \"kubernetes.io/projected/b686536e-83b5-45a3-8266-2a6f34b84c1c-kube-api-access-dr7ts\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.644672 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b686536e-83b5-45a3-8266-2a6f34b84c1c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:03 crc kubenswrapper[4772]: I0124 03:45:03.644688 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b686536e-83b5-45a3-8266-2a6f34b84c1c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.224640 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbcg" event={"ID":"0b681c8c-16cd-49d7-b3ac-facfe4238b0d","Type":"ContainerStarted","Data":"5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553"} Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.226440 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" event={"ID":"9000a239-6b89-4eee-9527-0701538cf737","Type":"ContainerStarted","Data":"e304c9bc0d7fc01949b5db20a21b2b8effb3ff3aab3199c0b5311ccf8c59fc5f"} Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.226773 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.228320 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.228305 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487105-jltsm" event={"ID":"b686536e-83b5-45a3-8266-2a6f34b84c1c","Type":"ContainerDied","Data":"ba739858378eaf39453ee837aa1abd99c237a514d529f206f71d0f5fa7ff0a9e"} Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.228488 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba739858378eaf39453ee837aa1abd99c237a514d529f206f71d0f5fa7ff0a9e" Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.231764 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.248915 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fqbcg" podStartSLOduration=2.821698881 podStartE2EDuration="52.24890058s" podCreationTimestamp="2026-01-24 03:44:12 +0000 UTC" firstStartedPulling="2026-01-24 03:44:14.235823795 +0000 UTC m=+151.272914520" lastFinishedPulling="2026-01-24 03:45:03.663025494 +0000 UTC m=+200.700116219" observedRunningTime="2026-01-24 03:45:04.246312238 +0000 UTC m=+201.283402963" watchObservedRunningTime="2026-01-24 03:45:04.24890058 +0000 UTC m=+201.285991305" Jan 24 03:45:04 crc kubenswrapper[4772]: I0124 03:45:04.274402 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" podStartSLOduration=16.274384456 podStartE2EDuration="16.274384456s" podCreationTimestamp="2026-01-24 03:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:04.272512593 +0000 UTC m=+201.309603318" watchObservedRunningTime="2026-01-24 03:45:04.274384456 +0000 UTC m=+201.311475181" Jan 24 03:45:08 crc kubenswrapper[4772]: I0124 03:45:08.009811 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn"] Jan 24 03:45:08 crc kubenswrapper[4772]: I0124 03:45:08.010548 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" podUID="9000a239-6b89-4eee-9527-0701538cf737" containerName="controller-manager" containerID="cri-o://e304c9bc0d7fc01949b5db20a21b2b8effb3ff3aab3199c0b5311ccf8c59fc5f" gracePeriod=30 Jan 24 03:45:08 crc kubenswrapper[4772]: I0124 03:45:08.021152 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh"] Jan 24 03:45:08 crc kubenswrapper[4772]: I0124 03:45:08.021380 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" podUID="d4c18fca-913e-416b-bd42-9b0bc69569a0" containerName="route-controller-manager" containerID="cri-o://3867b166dff55f03b4ac50c91e57108e1831a2710ade07b2052535bca4699dd5" gracePeriod=30
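Editor's note: the two "Observed pod startup duration" entries a few lines up report two numbers. podStartE2EDuration is the wall-clock time from podCreationTimestamp to the observed running time, and podStartSLOduration appears to be that E2E time minus the image-pull window (lastFinishedPulling minus firstStartedPulling); when nothing had to be pulled, the sentinel 0001-01-01 timestamps show up and the two durations coincide, as for controller-manager-5f596b4b9d-tdqvn. Checking the arithmetic against the logged fields for certified-operators-fqbcg:

```go
// slocalc.go: reproducing the tracker's arithmetic from the logged fields
// above. E2E is watchObservedRunningTime minus podCreationTimestamp, and the
// SLO duration subtracts the image-pull window from it.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-24 03:44:12 +0000 UTC")
	running := mustParse("2026-01-24 03:45:04.24890058 +0000 UTC")
	pullStart := mustParse("2026-01-24 03:44:14.235823795 +0000 UTC")
	pullEnd := mustParse("2026-01-24 03:45:03.663025494 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e, slo) // prints: 52.24890058s 2.821698881s
}
```

Both outputs match podStartE2EDuration="52.24890058s" and podStartSLOduration=2.821698881 exactly, so nearly all of this pod's 52-second startup was image pulling.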
output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 24 03:45:10 crc kubenswrapper[4772]: I0124 03:45:10.933905 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" podUID="d4c18fca-913e-416b-bd42-9b0bc69569a0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 24 03:45:12 crc kubenswrapper[4772]: I0124 03:45:12.739828 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fqbcg" Jan 24 03:45:12 crc kubenswrapper[4772]: I0124 03:45:12.739930 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fqbcg" Jan 24 03:45:12 crc kubenswrapper[4772]: I0124 03:45:12.969333 4772 patch_prober.go:28] interesting pod/controller-manager-5f596b4b9d-tdqvn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 24 03:45:12 crc kubenswrapper[4772]: I0124 03:45:12.969910 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" podUID="9000a239-6b89-4eee-9527-0701538cf737" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.004494 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7cf97b9c8b-mvmbh_d4c18fca-913e-416b-bd42-9b0bc69569a0/route-controller-manager/0.log" Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.004869 4772 generic.go:334] "Generic (PLEG): container finished" podID="d4c18fca-913e-416b-bd42-9b0bc69569a0" containerID="3867b166dff55f03b4ac50c91e57108e1831a2710ade07b2052535bca4699dd5" exitCode=-1 Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.005000 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" event={"ID":"d4c18fca-913e-416b-bd42-9b0bc69569a0","Type":"ContainerDied","Data":"3867b166dff55f03b4ac50c91e57108e1831a2710ade07b2052535bca4699dd5"} Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.584497 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fqbcg" Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.899955 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.900542 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 
Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.900619 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.901715 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7"} pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 03:45:16 crc kubenswrapper[4772]: I0124 03:45:16.901797 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7" gracePeriod=600 Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.121826 4772 generic.go:334] "Generic (PLEG): container finished" podID="9000a239-6b89-4eee-9527-0701538cf737" containerID="e304c9bc0d7fc01949b5db20a21b2b8effb3ff3aab3199c0b5311ccf8c59fc5f" exitCode=0 Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.121934 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" event={"ID":"9000a239-6b89-4eee-9527-0701538cf737","Type":"ContainerDied","Data":"e304c9bc0d7fc01949b5db20a21b2b8effb3ff3aab3199c0b5311ccf8c59fc5f"} Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.188000 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.225151 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6477d8465-w5m57"] Jan 24 03:45:17 crc kubenswrapper[4772]: E0124 03:45:17.227449 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b686536e-83b5-45a3-8266-2a6f34b84c1c" containerName="collect-profiles" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.227920 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b686536e-83b5-45a3-8266-2a6f34b84c1c" containerName="collect-profiles" Jan 24 03:45:17 crc kubenswrapper[4772]: E0124 03:45:17.228014 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9000a239-6b89-4eee-9527-0701538cf737" containerName="controller-manager" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.228070 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9000a239-6b89-4eee-9527-0701538cf737" containerName="controller-manager" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.228277 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9000a239-6b89-4eee-9527-0701538cf737" containerName="controller-manager" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.228373 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b686536e-83b5-45a3-8266-2a6f34b84c1c" containerName="collect-profiles"
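Editor's note: a failed liveness probe does not kill the container outright. The kubelet first logs "failed liveness probe, will be restarted", then kills with a grace period (600 s for machine-config-daemon-bnn82 above, versus 30 s for the deleted controller-manager pods). The shape of that shutdown is a polite TERM, a bounded wait, then a forced KILL. A toy, stand-alone illustration of the pattern (not kubelet code, Unix-only, with a 2-second grace period standing in for the 600 above):

```go
// gracekill.go: a toy illustration of "Killing container with a grace period".
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	gracePeriod := 2 * time.Second // stand-in for gracePeriod=600 above

	cmd := exec.Command("sleep", "600") // stand-in for the container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Step 1: polite stop; the runtime sends SIGTERM to the container's init.
	cmd.Process.Signal(syscall.SIGTERM)

	// Step 2: bounded wait. Step 3: force-kill once the grace period lapses.
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		cmd.Process.Kill()
		fmt.Println("grace period expired, sent SIGKILL:", <-done)
	}
}
```

Since sleep exits on SIGTERM, this prints the first branch; a process that ignores SIGTERM would ride out the full grace period and hit the SIGKILL branch, which is exactly what the 600-second window above allows the daemon to avoid.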
Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.229064 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.237975 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6477d8465-w5m57"] Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.262363 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.350895 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9000a239-6b89-4eee-9527-0701538cf737-serving-cert\") pod \"9000a239-6b89-4eee-9527-0701538cf737\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.350985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54b4n\" (UniqueName: \"kubernetes.io/projected/9000a239-6b89-4eee-9527-0701538cf737-kube-api-access-54b4n\") pod \"9000a239-6b89-4eee-9527-0701538cf737\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351010 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hq2\" (UniqueName: \"kubernetes.io/projected/d4c18fca-913e-416b-bd42-9b0bc69569a0-kube-api-access-v7hq2\") pod \"d4c18fca-913e-416b-bd42-9b0bc69569a0\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351048 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-proxy-ca-bundles\") pod \"9000a239-6b89-4eee-9527-0701538cf737\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-config\") pod \"9000a239-6b89-4eee-9527-0701538cf737\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351108 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-client-ca\") pod \"d4c18fca-913e-416b-bd42-9b0bc69569a0\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351169 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-client-ca\") pod \"9000a239-6b89-4eee-9527-0701538cf737\" (UID: \"9000a239-6b89-4eee-9527-0701538cf737\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351194 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-config\") pod \"d4c18fca-913e-416b-bd42-9b0bc69569a0\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351231 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/d4c18fca-913e-416b-bd42-9b0bc69569a0-serving-cert\") pod \"d4c18fca-913e-416b-bd42-9b0bc69569a0\" (UID: \"d4c18fca-913e-416b-bd42-9b0bc69569a0\") " Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.351433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-config\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.352384 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9000a239-6b89-4eee-9527-0701538cf737" (UID: "9000a239-6b89-4eee-9527-0701538cf737"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.352410 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-client-ca" (OuterVolumeSpecName: "client-ca") pod "9000a239-6b89-4eee-9527-0701538cf737" (UID: "9000a239-6b89-4eee-9527-0701538cf737"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.352602 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-config" (OuterVolumeSpecName: "config") pod "9000a239-6b89-4eee-9527-0701538cf737" (UID: "9000a239-6b89-4eee-9527-0701538cf737"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353079 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-config" (OuterVolumeSpecName: "config") pod "d4c18fca-913e-416b-bd42-9b0bc69569a0" (UID: "d4c18fca-913e-416b-bd42-9b0bc69569a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353220 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r27qv\" (UniqueName: \"kubernetes.io/projected/913d9835-4ea4-41f6-9af7-421be37d3ef2-kube-api-access-r27qv\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353264 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-client-ca\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353303 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-proxy-ca-bundles\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353334 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913d9835-4ea4-41f6-9af7-421be37d3ef2-serving-cert\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353392 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353404 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353414 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.353424 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9000a239-6b89-4eee-9527-0701538cf737-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.356112 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4c18fca-913e-416b-bd42-9b0bc69569a0" (UID: "d4c18fca-913e-416b-bd42-9b0bc69569a0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.373228 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9000a239-6b89-4eee-9527-0701538cf737-kube-api-access-54b4n" (OuterVolumeSpecName: "kube-api-access-54b4n") pod "9000a239-6b89-4eee-9527-0701538cf737" (UID: "9000a239-6b89-4eee-9527-0701538cf737"). InnerVolumeSpecName "kube-api-access-54b4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.373311 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c18fca-913e-416b-bd42-9b0bc69569a0-kube-api-access-v7hq2" (OuterVolumeSpecName: "kube-api-access-v7hq2") pod "d4c18fca-913e-416b-bd42-9b0bc69569a0" (UID: "d4c18fca-913e-416b-bd42-9b0bc69569a0"). InnerVolumeSpecName "kube-api-access-v7hq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.373779 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9000a239-6b89-4eee-9527-0701538cf737-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9000a239-6b89-4eee-9527-0701538cf737" (UID: "9000a239-6b89-4eee-9527-0701538cf737"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.373837 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c18fca-913e-416b-bd42-9b0bc69569a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4c18fca-913e-416b-bd42-9b0bc69569a0" (UID: "d4c18fca-913e-416b-bd42-9b0bc69569a0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456338 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-client-ca\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456407 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-proxy-ca-bundles\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456439 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913d9835-4ea4-41f6-9af7-421be37d3ef2-serving-cert\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-config\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r27qv\" (UniqueName: \"kubernetes.io/projected/913d9835-4ea4-41f6-9af7-421be37d3ef2-kube-api-access-r27qv\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456548 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9000a239-6b89-4eee-9527-0701538cf737-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456558 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54b4n\" (UniqueName: \"kubernetes.io/projected/9000a239-6b89-4eee-9527-0701538cf737-kube-api-access-54b4n\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456570 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hq2\" (UniqueName: \"kubernetes.io/projected/d4c18fca-913e-416b-bd42-9b0bc69569a0-kube-api-access-v7hq2\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456579 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c18fca-913e-416b-bd42-9b0bc69569a0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.456586 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c18fca-913e-416b-bd42-9b0bc69569a0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.457837 4772 
Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.458049 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-client-ca\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57"
Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.458574 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-config\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57"
Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.462556 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913d9835-4ea4-41f6-9af7-421be37d3ef2-serving-cert\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57"
Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.473871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r27qv\" (UniqueName: \"kubernetes.io/projected/913d9835-4ea4-41f6-9af7-421be37d3ef2-kube-api-access-r27qv\") pod \"controller-manager-6477d8465-w5m57\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") " pod="openshift-controller-manager/controller-manager-6477d8465-w5m57"
Jan 24 03:45:17 crc kubenswrapper[4772]: I0124 03:45:17.624748 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57"
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.131452 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8nr2" event={"ID":"d124ff24-991c-4a60-997d-b899e8387e0d","Type":"ContainerStarted","Data":"0b318248366a83b067bd93e230709317a6d25e468321c02707f07616b977cdda"}
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.134010 4772 generic.go:334] "Generic (PLEG): container finished" podID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerID="80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64" exitCode=0
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.134085 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgctw" event={"ID":"f65f1015-89a0-482e-87d3-f2b2e2149e2d","Type":"ContainerDied","Data":"80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64"}
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.140064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prjh9" event={"ID":"621244af-16f4-4b04-aa0c-5c71a7d49eb5","Type":"ContainerStarted","Data":"5ba27589ef06b2fd278854db4dc1be7bb5c6ab8cd8329db2e072d752e94ac19a"}
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.148912 4772 generic.go:334] "Generic (PLEG): container finished" podID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerID="68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753" exitCode=0
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.149010 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkqlj" event={"ID":"022f55bb-f179-48cc-ae69-a9936070e3b7","Type":"ContainerDied","Data":"68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753"}
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.153188 4772 generic.go:334] "Generic (PLEG): container finished" podID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerID="369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70" exitCode=0
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.153297 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcjhk" event={"ID":"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1","Type":"ContainerDied","Data":"369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70"}
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.168015 4772 generic.go:334] "Generic (PLEG): container finished" podID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerID="993f9d3167d64d9d798626f641152e902962f1c8a7e401eae910b7e70e570f1b" exitCode=0
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.168111 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j67b7" event={"ID":"83971acc-e6fb-4ff2-b45b-7f0dda461036","Type":"ContainerDied","Data":"993f9d3167d64d9d798626f641152e902962f1c8a7e401eae910b7e70e570f1b"}
Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.172012 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.172006 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh" event={"ID":"d4c18fca-913e-416b-bd42-9b0bc69569a0","Type":"ContainerDied","Data":"1f938fd10fae2f87b4765accf670b53bd171dfe4be7cd93d0bf14bdd6406f534"} Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.172481 4772 scope.go:117] "RemoveContainer" containerID="3867b166dff55f03b4ac50c91e57108e1831a2710ade07b2052535bca4699dd5" Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.180185 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7" exitCode=0 Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.180324 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7"} Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.180378 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"c89de51d5757a7263903a81bb9b304680a026a0c4e151b6af06d4d8c1040aabc"} Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.183334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j67dm" event={"ID":"6cd7a1a3-2773-4ffc-9cef-8015556b3b33","Type":"ContainerStarted","Data":"8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6"} Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.185640 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.186000 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn" event={"ID":"9000a239-6b89-4eee-9527-0701538cf737","Type":"ContainerDied","Data":"85d2a78216eb05f316f3adf144ca574504408d8c0baf13d5f81216862d1db79e"} Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.206557 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6477d8465-w5m57"] Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.253327 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fqbcg" Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.437176 4772 scope.go:117] "RemoveContainer" containerID="e304c9bc0d7fc01949b5db20a21b2b8effb3ff3aab3199c0b5311ccf8c59fc5f" Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.462212 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh"] Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.465124 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf97b9c8b-mvmbh"] Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.486954 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn"] Jan 24 03:45:18 crc kubenswrapper[4772]: I0124 03:45:18.494971 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f596b4b9d-tdqvn"] Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.194302 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j67b7" event={"ID":"83971acc-e6fb-4ff2-b45b-7f0dda461036","Type":"ContainerStarted","Data":"69bef0795b21db95c6fba2366d88fa9cdf689f3a3c0a7fc76705a7a0f7969008"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.196050 4772 generic.go:334] "Generic (PLEG): container finished" podID="d124ff24-991c-4a60-997d-b899e8387e0d" containerID="0b318248366a83b067bd93e230709317a6d25e468321c02707f07616b977cdda" exitCode=0 Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.196117 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8nr2" event={"ID":"d124ff24-991c-4a60-997d-b899e8387e0d","Type":"ContainerDied","Data":"0b318248366a83b067bd93e230709317a6d25e468321c02707f07616b977cdda"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.199519 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" event={"ID":"913d9835-4ea4-41f6-9af7-421be37d3ef2","Type":"ContainerStarted","Data":"4cdce5cc45d208616fb8c14f7adbd092e417e997a9b9e6dc1822ea0b72ad22cb"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.199556 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" event={"ID":"913d9835-4ea4-41f6-9af7-421be37d3ef2","Type":"ContainerStarted","Data":"e2be1af3466e9869ee242d9211cff216638836f1d3412ad8d8959b39caf04825"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.199574 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.202468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgctw" event={"ID":"f65f1015-89a0-482e-87d3-f2b2e2149e2d","Type":"ContainerStarted","Data":"518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.203786 4772 generic.go:334] "Generic (PLEG): container finished" podID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerID="5ba27589ef06b2fd278854db4dc1be7bb5c6ab8cd8329db2e072d752e94ac19a" exitCode=0 Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.203861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prjh9" event={"ID":"621244af-16f4-4b04-aa0c-5c71a7d49eb5","Type":"ContainerDied","Data":"5ba27589ef06b2fd278854db4dc1be7bb5c6ab8cd8329db2e072d752e94ac19a"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.212586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkqlj" event={"ID":"022f55bb-f179-48cc-ae69-a9936070e3b7","Type":"ContainerStarted","Data":"7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.214897 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j67b7" podStartSLOduration=1.816229187 podStartE2EDuration="1m5.214877959s" podCreationTimestamp="2026-01-24 03:44:14 +0000 UTC" firstStartedPulling="2026-01-24 03:44:15.291988463 +0000 UTC m=+152.329079188" lastFinishedPulling="2026-01-24 03:45:18.690637235 +0000 UTC m=+215.727727960" observedRunningTime="2026-01-24 03:45:19.214599231 +0000 UTC m=+216.251689976" watchObservedRunningTime="2026-01-24 03:45:19.214877959 +0000 UTC m=+216.251968684" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.214985 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.220809 4772 generic.go:334] "Generic (PLEG): container finished" podID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerID="8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6" exitCode=0 Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.220911 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j67dm" event={"ID":"6cd7a1a3-2773-4ffc-9cef-8015556b3b33","Type":"ContainerDied","Data":"8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.225716 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcjhk" event={"ID":"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1","Type":"ContainerStarted","Data":"a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b"} Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.234073 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jgctw" podStartSLOduration=3.936312189 podStartE2EDuration="1m8.234050988s" podCreationTimestamp="2026-01-24 03:44:11 +0000 UTC" firstStartedPulling="2026-01-24 03:44:14.264847619 +0000 UTC m=+151.301938344" lastFinishedPulling="2026-01-24 03:45:18.562586378 +0000 UTC m=+215.599677143" observedRunningTime="2026-01-24 03:45:19.229342436 
+0000 UTC m=+216.266433171" watchObservedRunningTime="2026-01-24 03:45:19.234050988 +0000 UTC m=+216.271141733" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.274657 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" podStartSLOduration=11.274628818 podStartE2EDuration="11.274628818s" podCreationTimestamp="2026-01-24 03:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:19.273477835 +0000 UTC m=+216.310568570" watchObservedRunningTime="2026-01-24 03:45:19.274628818 +0000 UTC m=+216.311719563" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.305760 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqbcg"] Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.313243 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bkqlj" podStartSLOduration=3.034040106 podStartE2EDuration="1m5.313217901s" podCreationTimestamp="2026-01-24 03:44:14 +0000 UTC" firstStartedPulling="2026-01-24 03:44:16.328667174 +0000 UTC m=+153.365757899" lastFinishedPulling="2026-01-24 03:45:18.607844969 +0000 UTC m=+215.644935694" observedRunningTime="2026-01-24 03:45:19.30283893 +0000 UTC m=+216.339929655" watchObservedRunningTime="2026-01-24 03:45:19.313217901 +0000 UTC m=+216.350308626" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.376205 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcjhk" podStartSLOduration=3.078838594 podStartE2EDuration="1m6.37618413s" podCreationTimestamp="2026-01-24 03:44:13 +0000 UTC" firstStartedPulling="2026-01-24 03:44:15.315703379 +0000 UTC m=+152.352794094" lastFinishedPulling="2026-01-24 03:45:18.613048895 +0000 UTC m=+215.650139630" observedRunningTime="2026-01-24 03:45:19.373020941 +0000 UTC m=+216.410111666" watchObservedRunningTime="2026-01-24 03:45:19.37618413 +0000 UTC m=+216.413274855" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.655494 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq"] Jan 24 03:45:19 crc kubenswrapper[4772]: E0124 03:45:19.655952 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c18fca-913e-416b-bd42-9b0bc69569a0" containerName="route-controller-manager" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.655973 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c18fca-913e-416b-bd42-9b0bc69569a0" containerName="route-controller-manager" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.656092 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c18fca-913e-416b-bd42-9b0bc69569a0" containerName="route-controller-manager" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.656612 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.659844 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.660104 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.660307 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.660486 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.660626 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.667398 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9000a239-6b89-4eee-9527-0701538cf737" path="/var/lib/kubelet/pods/9000a239-6b89-4eee-9527-0701538cf737/volumes" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.668237 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c18fca-913e-416b-bd42-9b0bc69569a0" path="/var/lib/kubelet/pods/d4c18fca-913e-416b-bd42-9b0bc69569a0/volumes" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.678455 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.690668 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq"] Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.804875 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-config\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.806054 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpsqh\" (UniqueName: \"kubernetes.io/projected/dcef7054-c83c-4aef-ae9a-b080fb1b0296-kube-api-access-tpsqh\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.806096 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-client-ca\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.806242 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dcef7054-c83c-4aef-ae9a-b080fb1b0296-serving-cert\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.907319 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpsqh\" (UniqueName: \"kubernetes.io/projected/dcef7054-c83c-4aef-ae9a-b080fb1b0296-kube-api-access-tpsqh\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.907374 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-client-ca\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.907432 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef7054-c83c-4aef-ae9a-b080fb1b0296-serving-cert\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.907459 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-config\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.908818 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-config\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.909401 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-client-ca\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.917258 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef7054-c83c-4aef-ae9a-b080fb1b0296-serving-cert\") pod \"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.945070 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpsqh\" (UniqueName: \"kubernetes.io/projected/dcef7054-c83c-4aef-ae9a-b080fb1b0296-kube-api-access-tpsqh\") pod 
\"route-controller-manager-5d76486c96-m4kcq\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:19 crc kubenswrapper[4772]: I0124 03:45:19.977029 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.234341 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prjh9" event={"ID":"621244af-16f4-4b04-aa0c-5c71a7d49eb5","Type":"ContainerStarted","Data":"bf8553276efedf44fcc946e1e54c7d397c9786f720a3e6894cb3b83673dcc17a"} Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.238963 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j67dm" event={"ID":"6cd7a1a3-2773-4ffc-9cef-8015556b3b33","Type":"ContainerStarted","Data":"2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b"} Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.243180 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8nr2" event={"ID":"d124ff24-991c-4a60-997d-b899e8387e0d","Type":"ContainerStarted","Data":"d20cab397c4b49bed38271ef533647c6a39cfff66dae69c6f944272d00abaca6"} Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.243530 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fqbcg" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="registry-server" containerID="cri-o://5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553" gracePeriod=2 Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.281029 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-prjh9" podStartSLOduration=2.891021757 podStartE2EDuration="1m8.281000805s" podCreationTimestamp="2026-01-24 03:44:12 +0000 UTC" firstStartedPulling="2026-01-24 03:44:14.277997508 +0000 UTC m=+151.315088233" lastFinishedPulling="2026-01-24 03:45:19.667976556 +0000 UTC m=+216.705067281" observedRunningTime="2026-01-24 03:45:20.277488886 +0000 UTC m=+217.314579611" watchObservedRunningTime="2026-01-24 03:45:20.281000805 +0000 UTC m=+217.318091550" Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.330481 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t8nr2" podStartSLOduration=1.846381125 podStartE2EDuration="1m5.330462594s" podCreationTimestamp="2026-01-24 03:44:15 +0000 UTC" firstStartedPulling="2026-01-24 03:44:16.318247861 +0000 UTC m=+153.355338576" lastFinishedPulling="2026-01-24 03:45:19.80232932 +0000 UTC m=+216.839420045" observedRunningTime="2026-01-24 03:45:20.327945633 +0000 UTC m=+217.365036368" watchObservedRunningTime="2026-01-24 03:45:20.330462594 +0000 UTC m=+217.367553319" Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.373567 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j67dm" podStartSLOduration=3.745209766 podStartE2EDuration="1m9.373539473s" podCreationTimestamp="2026-01-24 03:44:11 +0000 UTC" firstStartedPulling="2026-01-24 03:44:14.102899466 +0000 UTC m=+151.139990191" lastFinishedPulling="2026-01-24 03:45:19.731229163 +0000 UTC m=+216.768319898" observedRunningTime="2026-01-24 03:45:20.372317128 +0000 UTC m=+217.409407853" 
watchObservedRunningTime="2026-01-24 03:45:20.373539473 +0000 UTC m=+217.410630198" Jan 24 03:45:20 crc kubenswrapper[4772]: I0124 03:45:20.426091 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq"] Jan 24 03:45:20 crc kubenswrapper[4772]: W0124 03:45:20.454670 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcef7054_c83c_4aef_ae9a_b080fb1b0296.slice/crio-badbb4abf15726d052c1ad1645f143303d39f741931a5449686f162d9725a9e9 WatchSource:0}: Error finding container badbb4abf15726d052c1ad1645f143303d39f741931a5449686f162d9725a9e9: Status 404 returned error can't find the container with id badbb4abf15726d052c1ad1645f143303d39f741931a5449686f162d9725a9e9 Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.209077 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbcg" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.251540 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" event={"ID":"dcef7054-c83c-4aef-ae9a-b080fb1b0296","Type":"ContainerStarted","Data":"d4d0bd7d50372bf4ed5835bf2c37d0c6c0cdbef19f6e487f1f29cae4516e8804"} Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.251591 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" event={"ID":"dcef7054-c83c-4aef-ae9a-b080fb1b0296","Type":"ContainerStarted","Data":"badbb4abf15726d052c1ad1645f143303d39f741931a5449686f162d9725a9e9"} Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.251999 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.255430 4772 generic.go:334] "Generic (PLEG): container finished" podID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerID="5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553" exitCode=0 Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.255583 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbcg" event={"ID":"0b681c8c-16cd-49d7-b3ac-facfe4238b0d","Type":"ContainerDied","Data":"5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553"} Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.255643 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbcg" event={"ID":"0b681c8c-16cd-49d7-b3ac-facfe4238b0d","Type":"ContainerDied","Data":"dd2ae1d757bbe5b1e27816c10ee80f51457966df06c60a8f01082f83650dbd1b"} Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.255663 4772 scope.go:117] "RemoveContainer" containerID="5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.255913 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqbcg" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.263767 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.276352 4772 scope.go:117] "RemoveContainer" containerID="d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.283344 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" podStartSLOduration=13.283328867 podStartE2EDuration="13.283328867s" podCreationTimestamp="2026-01-24 03:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:21.28130441 +0000 UTC m=+218.318395135" watchObservedRunningTime="2026-01-24 03:45:21.283328867 +0000 UTC m=+218.320419592" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.308370 4772 scope.go:117] "RemoveContainer" containerID="344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.321506 4772 scope.go:117] "RemoveContainer" containerID="5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553" Jan 24 03:45:21 crc kubenswrapper[4772]: E0124 03:45:21.322027 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553\": container with ID starting with 5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553 not found: ID does not exist" containerID="5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.322090 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553"} err="failed to get container status \"5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553\": rpc error: code = NotFound desc = could not find container \"5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553\": container with ID starting with 5578d6a4b0409fe1dc5a66f1948ba4b52243e5ac16e0f12b1cba6d4da2e5d553 not found: ID does not exist" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.322132 4772 scope.go:117] "RemoveContainer" containerID="d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9" Jan 24 03:45:21 crc kubenswrapper[4772]: E0124 03:45:21.322451 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9\": container with ID starting with d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9 not found: ID does not exist" containerID="d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.322476 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9"} err="failed to get container status \"d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9\": rpc error: code = NotFound desc = could not find container 
\"d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9\": container with ID starting with d7aebc02e56fd381784360f605e73175266ab4786397f6a4ebc2ee3a12bae0b9 not found: ID does not exist" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.322497 4772 scope.go:117] "RemoveContainer" containerID="344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35" Jan 24 03:45:21 crc kubenswrapper[4772]: E0124 03:45:21.322680 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35\": container with ID starting with 344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35 not found: ID does not exist" containerID="344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.322712 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35"} err="failed to get container status \"344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35\": rpc error: code = NotFound desc = could not find container \"344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35\": container with ID starting with 344cc384e80ed14a2751f2f732a2023ffb3fdc6e5594ee169445d9cde9477d35 not found: ID does not exist" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.330796 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-catalog-content\") pod \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.330876 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-utilities\") pod \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.330940 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqdvx\" (UniqueName: \"kubernetes.io/projected/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-kube-api-access-wqdvx\") pod \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\" (UID: \"0b681c8c-16cd-49d7-b3ac-facfe4238b0d\") " Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.331801 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-utilities" (OuterVolumeSpecName: "utilities") pod "0b681c8c-16cd-49d7-b3ac-facfe4238b0d" (UID: "0b681c8c-16cd-49d7-b3ac-facfe4238b0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.354978 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-kube-api-access-wqdvx" (OuterVolumeSpecName: "kube-api-access-wqdvx") pod "0b681c8c-16cd-49d7-b3ac-facfe4238b0d" (UID: "0b681c8c-16cd-49d7-b3ac-facfe4238b0d"). InnerVolumeSpecName "kube-api-access-wqdvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.412491 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b681c8c-16cd-49d7-b3ac-facfe4238b0d" (UID: "0b681c8c-16cd-49d7-b3ac-facfe4238b0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.432848 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.432893 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqdvx\" (UniqueName: \"kubernetes.io/projected/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-kube-api-access-wqdvx\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.432909 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b681c8c-16cd-49d7-b3ac-facfe4238b0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.583082 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqbcg"] Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.589300 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fqbcg"] Jan 24 03:45:21 crc kubenswrapper[4772]: I0124 03:45:21.666603 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" path="/var/lib/kubelet/pods/0b681c8c-16cd-49d7-b3ac-facfe4238b0d/volumes" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.055253 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.055319 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.299128 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jgctw" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.299190 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jgctw" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.350441 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jgctw" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.539914 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-prjh9" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.539985 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-prjh9" Jan 24 03:45:22 crc kubenswrapper[4772]: I0124 03:45:22.606228 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-prjh9" Jan 24 03:45:23 crc kubenswrapper[4772]: I0124 03:45:23.121894 4772 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-j67dm" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="registry-server" probeResult="failure" output=< Jan 24 03:45:23 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Jan 24 03:45:23 crc kubenswrapper[4772]: > Jan 24 03:45:23 crc kubenswrapper[4772]: I0124 03:45:23.316697 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jgctw" Jan 24 03:45:24 crc kubenswrapper[4772]: I0124 03:45:24.086230 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcjhk" Jan 24 03:45:24 crc kubenswrapper[4772]: I0124 03:45:24.086315 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcjhk" Jan 24 03:45:24 crc kubenswrapper[4772]: I0124 03:45:24.131772 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcjhk" Jan 24 03:45:24 crc kubenswrapper[4772]: I0124 03:45:24.341875 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcjhk" Jan 24 03:45:24 crc kubenswrapper[4772]: I0124 03:45:24.530025 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j67b7" Jan 24 03:45:24 crc kubenswrapper[4772]: I0124 03:45:24.530509 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j67b7" Jan 24 03:45:24 crc kubenswrapper[4772]: I0124 03:45:24.589960 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j67b7" Jan 24 03:45:25 crc kubenswrapper[4772]: I0124 03:45:25.239802 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bkqlj" Jan 24 03:45:25 crc kubenswrapper[4772]: I0124 03:45:25.239860 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bkqlj" Jan 24 03:45:25 crc kubenswrapper[4772]: I0124 03:45:25.322173 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j67b7" Jan 24 03:45:25 crc kubenswrapper[4772]: I0124 03:45:25.664639 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:45:25 crc kubenswrapper[4772]: I0124 03:45:25.664685 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:45:26 crc kubenswrapper[4772]: I0124 03:45:26.281891 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkqlj" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="registry-server" probeResult="failure" output=< Jan 24 03:45:26 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Jan 24 03:45:26 crc kubenswrapper[4772]: > Jan 24 03:45:26 crc kubenswrapper[4772]: I0124 03:45:26.698181 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8nr2" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="registry-server" probeResult="failure" output=< Jan 24 03:45:26 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Jan 24 
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.032470 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6477d8465-w5m57"]
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.033025 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" podUID="913d9835-4ea4-41f6-9af7-421be37d3ef2" containerName="controller-manager" containerID="cri-o://4cdce5cc45d208616fb8c14f7adbd092e417e997a9b9e6dc1822ea0b72ad22cb" gracePeriod=30
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.177324 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq"]
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.177562 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" podUID="dcef7054-c83c-4aef-ae9a-b080fb1b0296" containerName="route-controller-manager" containerID="cri-o://d4d0bd7d50372bf4ed5835bf2c37d0c6c0cdbef19f6e487f1f29cae4516e8804" gracePeriod=30
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.317041 4772 generic.go:334] "Generic (PLEG): container finished" podID="913d9835-4ea4-41f6-9af7-421be37d3ef2" containerID="4cdce5cc45d208616fb8c14f7adbd092e417e997a9b9e6dc1822ea0b72ad22cb" exitCode=0
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.317131 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" event={"ID":"913d9835-4ea4-41f6-9af7-421be37d3ef2","Type":"ContainerDied","Data":"4cdce5cc45d208616fb8c14f7adbd092e417e997a9b9e6dc1822ea0b72ad22cb"}
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.679419 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57"
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.778285 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r27qv\" (UniqueName: \"kubernetes.io/projected/913d9835-4ea4-41f6-9af7-421be37d3ef2-kube-api-access-r27qv\") pod \"913d9835-4ea4-41f6-9af7-421be37d3ef2\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") "
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.778449 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913d9835-4ea4-41f6-9af7-421be37d3ef2-serving-cert\") pod \"913d9835-4ea4-41f6-9af7-421be37d3ef2\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") "
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.778501 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-proxy-ca-bundles\") pod \"913d9835-4ea4-41f6-9af7-421be37d3ef2\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") "
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.778624 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-client-ca\") pod \"913d9835-4ea4-41f6-9af7-421be37d3ef2\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") "
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.778660 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-config\") pod \"913d9835-4ea4-41f6-9af7-421be37d3ef2\" (UID: \"913d9835-4ea4-41f6-9af7-421be37d3ef2\") "
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.779473 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "913d9835-4ea4-41f6-9af7-421be37d3ef2" (UID: "913d9835-4ea4-41f6-9af7-421be37d3ef2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.779685 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-client-ca" (OuterVolumeSpecName: "client-ca") pod "913d9835-4ea4-41f6-9af7-421be37d3ef2" (UID: "913d9835-4ea4-41f6-9af7-421be37d3ef2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.779706 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-config" (OuterVolumeSpecName: "config") pod "913d9835-4ea4-41f6-9af7-421be37d3ef2" (UID: "913d9835-4ea4-41f6-9af7-421be37d3ef2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.787084 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913d9835-4ea4-41f6-9af7-421be37d3ef2-kube-api-access-r27qv" (OuterVolumeSpecName: "kube-api-access-r27qv") pod "913d9835-4ea4-41f6-9af7-421be37d3ef2" (UID: "913d9835-4ea4-41f6-9af7-421be37d3ef2"). InnerVolumeSpecName "kube-api-access-r27qv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.789027 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d9835-4ea4-41f6-9af7-421be37d3ef2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "913d9835-4ea4-41f6-9af7-421be37d3ef2" (UID: "913d9835-4ea4-41f6-9af7-421be37d3ef2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.880096 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r27qv\" (UniqueName: \"kubernetes.io/projected/913d9835-4ea4-41f6-9af7-421be37d3ef2-kube-api-access-r27qv\") on node \"crc\" DevicePath \"\""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.880169 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913d9835-4ea4-41f6-9af7-421be37d3ef2-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.880189 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.880206 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-client-ca\") on node \"crc\" DevicePath \"\""
Jan 24 03:45:28 crc kubenswrapper[4772]: I0124 03:45:28.880222 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913d9835-4ea4-41f6-9af7-421be37d3ef2-config\") on node \"crc\" DevicePath \"\""
Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.325462 4772 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.325424 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6477d8465-w5m57" event={"ID":"913d9835-4ea4-41f6-9af7-421be37d3ef2","Type":"ContainerDied","Data":"e2be1af3466e9869ee242d9211cff216638836f1d3412ad8d8959b39caf04825"} Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.325677 4772 scope.go:117] "RemoveContainer" containerID="4cdce5cc45d208616fb8c14f7adbd092e417e997a9b9e6dc1822ea0b72ad22cb" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.327924 4772 generic.go:334] "Generic (PLEG): container finished" podID="dcef7054-c83c-4aef-ae9a-b080fb1b0296" containerID="d4d0bd7d50372bf4ed5835bf2c37d0c6c0cdbef19f6e487f1f29cae4516e8804" exitCode=0 Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.328035 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" event={"ID":"dcef7054-c83c-4aef-ae9a-b080fb1b0296","Type":"ContainerDied","Data":"d4d0bd7d50372bf4ed5835bf2c37d0c6c0cdbef19f6e487f1f29cae4516e8804"} Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.371457 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6477d8465-w5m57"] Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.377436 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6477d8465-w5m57"] Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.667397 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913d9835-4ea4-41f6-9af7-421be37d3ef2" path="/var/lib/kubelet/pods/913d9835-4ea4-41f6-9af7-421be37d3ef2/volumes" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.667881 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d44df949-77pxp"] Jan 24 03:45:29 crc kubenswrapper[4772]: E0124 03:45:29.668058 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913d9835-4ea4-41f6-9af7-421be37d3ef2" containerName="controller-manager" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.668071 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="913d9835-4ea4-41f6-9af7-421be37d3ef2" containerName="controller-manager" Jan 24 03:45:29 crc kubenswrapper[4772]: E0124 03:45:29.668083 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="extract-utilities" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.668089 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="extract-utilities" Jan 24 03:45:29 crc kubenswrapper[4772]: E0124 03:45:29.668099 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="registry-server" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.668106 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="registry-server" Jan 24 03:45:29 crc kubenswrapper[4772]: E0124 03:45:29.668120 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="extract-content" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.668127 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="extract-content" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.668224 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="913d9835-4ea4-41f6-9af7-421be37d3ef2" containerName="controller-manager" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.668240 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b681c8c-16cd-49d7-b3ac-facfe4238b0d" containerName="registry-server" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.668630 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.676400 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.676637 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.676812 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.676632 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.677701 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.677937 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.688477 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.693334 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d44df949-77pxp"] Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.707928 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j67b7"] Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.724035 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j67b7" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="registry-server" containerID="cri-o://69bef0795b21db95c6fba2366d88fa9cdf689f3a3c0a7fc76705a7a0f7969008" gracePeriod=2 Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.795701 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b91dd91-02e0-44fa-8440-6e85e1f9960c-serving-cert\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.795795 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbbd\" (UniqueName: \"kubernetes.io/projected/9b91dd91-02e0-44fa-8440-6e85e1f9960c-kube-api-access-dpbbd\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " 
pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.795860 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-config\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.795942 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-client-ca\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.795969 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-proxy-ca-bundles\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.897215 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-client-ca\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.897685 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-proxy-ca-bundles\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.897764 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b91dd91-02e0-44fa-8440-6e85e1f9960c-serving-cert\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.897795 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbbd\" (UniqueName: \"kubernetes.io/projected/9b91dd91-02e0-44fa-8440-6e85e1f9960c-kube-api-access-dpbbd\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.897833 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-config\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.898247 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-client-ca\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.899470 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-config\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.900283 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b91dd91-02e0-44fa-8440-6e85e1f9960c-proxy-ca-bundles\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.910796 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b91dd91-02e0-44fa-8440-6e85e1f9960c-serving-cert\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.915781 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbbd\" (UniqueName: \"kubernetes.io/projected/9b91dd91-02e0-44fa-8440-6e85e1f9960c-kube-api-access-dpbbd\") pod \"controller-manager-5d44df949-77pxp\" (UID: \"9b91dd91-02e0-44fa-8440-6e85e1f9960c\") " pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:29 crc kubenswrapper[4772]: I0124 03:45:29.950227 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.016658 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.099965 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpsqh\" (UniqueName: \"kubernetes.io/projected/dcef7054-c83c-4aef-ae9a-b080fb1b0296-kube-api-access-tpsqh\") pod \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.100092 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-config\") pod \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.100146 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-client-ca\") pod \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.100202 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef7054-c83c-4aef-ae9a-b080fb1b0296-serving-cert\") pod \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\" (UID: \"dcef7054-c83c-4aef-ae9a-b080fb1b0296\") " Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.102824 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-client-ca" (OuterVolumeSpecName: "client-ca") pod "dcef7054-c83c-4aef-ae9a-b080fb1b0296" (UID: "dcef7054-c83c-4aef-ae9a-b080fb1b0296"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.104521 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcef7054-c83c-4aef-ae9a-b080fb1b0296-kube-api-access-tpsqh" (OuterVolumeSpecName: "kube-api-access-tpsqh") pod "dcef7054-c83c-4aef-ae9a-b080fb1b0296" (UID: "dcef7054-c83c-4aef-ae9a-b080fb1b0296"). InnerVolumeSpecName "kube-api-access-tpsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.104943 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-config" (OuterVolumeSpecName: "config") pod "dcef7054-c83c-4aef-ae9a-b080fb1b0296" (UID: "dcef7054-c83c-4aef-ae9a-b080fb1b0296"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.106119 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcef7054-c83c-4aef-ae9a-b080fb1b0296-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcef7054-c83c-4aef-ae9a-b080fb1b0296" (UID: "dcef7054-c83c-4aef-ae9a-b080fb1b0296"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.201760 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpsqh\" (UniqueName: \"kubernetes.io/projected/dcef7054-c83c-4aef-ae9a-b080fb1b0296-kube-api-access-tpsqh\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.201797 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.201812 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef7054-c83c-4aef-ae9a-b080fb1b0296-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.201823 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef7054-c83c-4aef-ae9a-b080fb1b0296-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.217793 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d44df949-77pxp"] Jan 24 03:45:30 crc kubenswrapper[4772]: W0124 03:45:30.226036 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b91dd91_02e0_44fa_8440_6e85e1f9960c.slice/crio-58f9109e3411639bf03f59aee55a1ef014764e1727c2e808b49f2571ec17a7b7 WatchSource:0}: Error finding container 58f9109e3411639bf03f59aee55a1ef014764e1727c2e808b49f2571ec17a7b7: Status 404 returned error can't find the container with id 58f9109e3411639bf03f59aee55a1ef014764e1727c2e808b49f2571ec17a7b7 Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.334694 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" event={"ID":"dcef7054-c83c-4aef-ae9a-b080fb1b0296","Type":"ContainerDied","Data":"badbb4abf15726d052c1ad1645f143303d39f741931a5449686f162d9725a9e9"} Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.335101 4772 scope.go:117] "RemoveContainer" containerID="d4d0bd7d50372bf4ed5835bf2c37d0c6c0cdbef19f6e487f1f29cae4516e8804" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.334819 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq" Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.336882 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" event={"ID":"9b91dd91-02e0-44fa-8440-6e85e1f9960c","Type":"ContainerStarted","Data":"58f9109e3411639bf03f59aee55a1ef014764e1727c2e808b49f2571ec17a7b7"} Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.367174 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq"] Jan 24 03:45:30 crc kubenswrapper[4772]: I0124 03:45:30.369635 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d76486c96-m4kcq"] Jan 24 03:45:31 crc kubenswrapper[4772]: I0124 03:45:31.351223 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" event={"ID":"9b91dd91-02e0-44fa-8440-6e85e1f9960c","Type":"ContainerStarted","Data":"c192c26d36b5b7eb0f5b8e19598bd0d442c53c79b402b09a7c8067ea98a911d8"} Jan 24 03:45:31 crc kubenswrapper[4772]: I0124 03:45:31.351772 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:31 crc kubenswrapper[4772]: I0124 03:45:31.356998 4772 generic.go:334] "Generic (PLEG): container finished" podID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerID="69bef0795b21db95c6fba2366d88fa9cdf689f3a3c0a7fc76705a7a0f7969008" exitCode=0 Jan 24 03:45:31 crc kubenswrapper[4772]: I0124 03:45:31.357053 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j67b7" event={"ID":"83971acc-e6fb-4ff2-b45b-7f0dda461036","Type":"ContainerDied","Data":"69bef0795b21db95c6fba2366d88fa9cdf689f3a3c0a7fc76705a7a0f7969008"} Jan 24 03:45:31 crc kubenswrapper[4772]: I0124 03:45:31.359961 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" Jan 24 03:45:31 crc kubenswrapper[4772]: I0124 03:45:31.377476 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d44df949-77pxp" podStartSLOduration=3.377453696 podStartE2EDuration="3.377453696s" podCreationTimestamp="2026-01-24 03:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:31.375352177 +0000 UTC m=+228.412442952" watchObservedRunningTime="2026-01-24 03:45:31.377453696 +0000 UTC m=+228.414544421" Jan 24 03:45:31 crc kubenswrapper[4772]: I0124 03:45:31.665989 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcef7054-c83c-4aef-ae9a-b080fb1b0296" path="/var/lib/kubelet/pods/dcef7054-c83c-4aef-ae9a-b080fb1b0296/volumes" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.116379 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.160880 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.591130 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-prjh9" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.661811 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq"] Jan 24 03:45:32 crc kubenswrapper[4772]: E0124 03:45:32.662363 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcef7054-c83c-4aef-ae9a-b080fb1b0296" containerName="route-controller-manager" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.662469 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcef7054-c83c-4aef-ae9a-b080fb1b0296" containerName="route-controller-manager" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.664106 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcef7054-c83c-4aef-ae9a-b080fb1b0296" containerName="route-controller-manager" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.665645 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.666980 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq"] Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.670214 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.670321 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.670645 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.671436 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.671816 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.671857 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.839470 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8gnt\" (UniqueName: \"kubernetes.io/projected/83a32f50-61b4-4fbc-803c-6228ac53f794-kube-api-access-x8gnt\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.839565 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83a32f50-61b4-4fbc-803c-6228ac53f794-client-ca\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.839615 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/83a32f50-61b4-4fbc-803c-6228ac53f794-serving-cert\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.839659 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a32f50-61b4-4fbc-803c-6228ac53f794-config\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.941207 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8gnt\" (UniqueName: \"kubernetes.io/projected/83a32f50-61b4-4fbc-803c-6228ac53f794-kube-api-access-x8gnt\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.941290 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83a32f50-61b4-4fbc-803c-6228ac53f794-client-ca\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.941324 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a32f50-61b4-4fbc-803c-6228ac53f794-serving-cert\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.941343 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a32f50-61b4-4fbc-803c-6228ac53f794-config\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.942217 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83a32f50-61b4-4fbc-803c-6228ac53f794-client-ca\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.942426 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a32f50-61b4-4fbc-803c-6228ac53f794-config\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.949474 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a32f50-61b4-4fbc-803c-6228ac53f794-serving-cert\") pod 
\"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.961424 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8gnt\" (UniqueName: \"kubernetes.io/projected/83a32f50-61b4-4fbc-803c-6228ac53f794-kube-api-access-x8gnt\") pod \"route-controller-manager-7c7cdcb8c6-d4tkq\" (UID: \"83a32f50-61b4-4fbc-803c-6228ac53f794\") " pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:32 crc kubenswrapper[4772]: I0124 03:45:32.984717 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.328330 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j67b7" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.372481 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j67b7" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.372488 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j67b7" event={"ID":"83971acc-e6fb-4ff2-b45b-7f0dda461036","Type":"ContainerDied","Data":"7ee38e727bb8b70ad934437c425f42452e0a865df37c5014e0c11dde908ea852"} Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.372601 4772 scope.go:117] "RemoveContainer" containerID="69bef0795b21db95c6fba2366d88fa9cdf689f3a3c0a7fc76705a7a0f7969008" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.392224 4772 scope.go:117] "RemoveContainer" containerID="993f9d3167d64d9d798626f641152e902962f1c8a7e401eae910b7e70e570f1b" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.406350 4772 scope.go:117] "RemoveContainer" containerID="d61abdb167ed3c6568a5c5e6cb56d3da84ef64155eaf5c8b5d76c5d0a3e7ee3d" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.453529 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-utilities\") pod \"83971acc-e6fb-4ff2-b45b-7f0dda461036\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.453632 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-catalog-content\") pod \"83971acc-e6fb-4ff2-b45b-7f0dda461036\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.453683 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k88x4\" (UniqueName: \"kubernetes.io/projected/83971acc-e6fb-4ff2-b45b-7f0dda461036-kube-api-access-k88x4\") pod \"83971acc-e6fb-4ff2-b45b-7f0dda461036\" (UID: \"83971acc-e6fb-4ff2-b45b-7f0dda461036\") " Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.455711 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-utilities" (OuterVolumeSpecName: "utilities") pod "83971acc-e6fb-4ff2-b45b-7f0dda461036" (UID: "83971acc-e6fb-4ff2-b45b-7f0dda461036"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.460423 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83971acc-e6fb-4ff2-b45b-7f0dda461036-kube-api-access-k88x4" (OuterVolumeSpecName: "kube-api-access-k88x4") pod "83971acc-e6fb-4ff2-b45b-7f0dda461036" (UID: "83971acc-e6fb-4ff2-b45b-7f0dda461036"). InnerVolumeSpecName "kube-api-access-k88x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.482808 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83971acc-e6fb-4ff2-b45b-7f0dda461036" (UID: "83971acc-e6fb-4ff2-b45b-7f0dda461036"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.491400 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq"] Jan 24 03:45:33 crc kubenswrapper[4772]: W0124 03:45:33.497204 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83a32f50_61b4_4fbc_803c_6228ac53f794.slice/crio-a7e7bf49fe3e19347012b7eb03e85fc286e75e141b72081d4fe1e13b514f97fc WatchSource:0}: Error finding container a7e7bf49fe3e19347012b7eb03e85fc286e75e141b72081d4fe1e13b514f97fc: Status 404 returned error can't find the container with id a7e7bf49fe3e19347012b7eb03e85fc286e75e141b72081d4fe1e13b514f97fc Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.556111 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.556669 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83971acc-e6fb-4ff2-b45b-7f0dda461036-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.556688 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k88x4\" (UniqueName: \"kubernetes.io/projected/83971acc-e6fb-4ff2-b45b-7f0dda461036-kube-api-access-k88x4\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.695097 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j67b7"] Jan 24 03:45:33 crc kubenswrapper[4772]: I0124 03:45:33.697773 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j67b7"] Jan 24 03:45:34 crc kubenswrapper[4772]: I0124 03:45:34.379680 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" event={"ID":"83a32f50-61b4-4fbc-803c-6228ac53f794","Type":"ContainerStarted","Data":"d28f85112ac21d489639825d17b72c396d6e6215819ca92862c4dba0197db0bb"} Jan 24 03:45:34 crc kubenswrapper[4772]: I0124 03:45:34.379932 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" 
event={"ID":"83a32f50-61b4-4fbc-803c-6228ac53f794","Type":"ContainerStarted","Data":"a7e7bf49fe3e19347012b7eb03e85fc286e75e141b72081d4fe1e13b514f97fc"} Jan 24 03:45:34 crc kubenswrapper[4772]: I0124 03:45:34.705496 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mk8n7"] Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.099160 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prjh9"] Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.099386 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-prjh9" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="registry-server" containerID="cri-o://bf8553276efedf44fcc946e1e54c7d397c9786f720a3e6894cb3b83673dcc17a" gracePeriod=2 Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.287326 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bkqlj" Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.330133 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bkqlj" Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.412067 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" podStartSLOduration=7.412045128 podStartE2EDuration="7.412045128s" podCreationTimestamp="2026-01-24 03:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:35.406612356 +0000 UTC m=+232.443703081" watchObservedRunningTime="2026-01-24 03:45:35.412045128 +0000 UTC m=+232.449135853" Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.667482 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" path="/var/lib/kubelet/pods/83971acc-e6fb-4ff2-b45b-7f0dda461036/volumes" Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.704209 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:45:35 crc kubenswrapper[4772]: I0124 03:45:35.749879 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:45:36 crc kubenswrapper[4772]: I0124 03:45:36.410684 4772 generic.go:334] "Generic (PLEG): container finished" podID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerID="bf8553276efedf44fcc946e1e54c7d397c9786f720a3e6894cb3b83673dcc17a" exitCode=0 Jan 24 03:45:36 crc kubenswrapper[4772]: I0124 03:45:36.411502 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prjh9" event={"ID":"621244af-16f4-4b04-aa0c-5c71a7d49eb5","Type":"ContainerDied","Data":"bf8553276efedf44fcc946e1e54c7d397c9786f720a3e6894cb3b83673dcc17a"} Jan 24 03:45:36 crc kubenswrapper[4772]: I0124 03:45:36.902380 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-prjh9" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.005796 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/621244af-16f4-4b04-aa0c-5c71a7d49eb5-kube-api-access-xkf7f\") pod \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.005878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-utilities\") pod \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.005976 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-catalog-content\") pod \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\" (UID: \"621244af-16f4-4b04-aa0c-5c71a7d49eb5\") " Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.006817 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-utilities" (OuterVolumeSpecName: "utilities") pod "621244af-16f4-4b04-aa0c-5c71a7d49eb5" (UID: "621244af-16f4-4b04-aa0c-5c71a7d49eb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.013466 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621244af-16f4-4b04-aa0c-5c71a7d49eb5-kube-api-access-xkf7f" (OuterVolumeSpecName: "kube-api-access-xkf7f") pod "621244af-16f4-4b04-aa0c-5c71a7d49eb5" (UID: "621244af-16f4-4b04-aa0c-5c71a7d49eb5"). InnerVolumeSpecName "kube-api-access-xkf7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.022242 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.022275 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/621244af-16f4-4b04-aa0c-5c71a7d49eb5-kube-api-access-xkf7f\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.091652 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "621244af-16f4-4b04-aa0c-5c71a7d49eb5" (UID: "621244af-16f4-4b04-aa0c-5c71a7d49eb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.124028 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621244af-16f4-4b04-aa0c-5c71a7d49eb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.419163 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prjh9" event={"ID":"621244af-16f4-4b04-aa0c-5c71a7d49eb5","Type":"ContainerDied","Data":"0e2085096f658aba326d3b80086dfe35ad5fd002b8af57714d3cb79a316fe98f"} Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.419263 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prjh9" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.419300 4772 scope.go:117] "RemoveContainer" containerID="bf8553276efedf44fcc946e1e54c7d397c9786f720a3e6894cb3b83673dcc17a" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.435516 4772 scope.go:117] "RemoveContainer" containerID="5ba27589ef06b2fd278854db4dc1be7bb5c6ab8cd8329db2e072d752e94ac19a" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.445561 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prjh9"] Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.448325 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-prjh9"] Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.457833 4772 scope.go:117] "RemoveContainer" containerID="4d233950f01bf0b87b0b5f648ff1b76047f6220b85410540bdb0e070b4d1b4c2" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638387 4772 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.638662 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="registry-server" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638678 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="registry-server" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.638689 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="extract-utilities" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638697 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="extract-utilities" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.638717 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="extract-utilities" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638730 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="extract-utilities" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.638770 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="extract-content" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638778 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="extract-content" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 
03:45:37.638787 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="extract-content" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638792 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="extract-content" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.638802 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="registry-server" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638808 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="registry-server" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638906 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" containerName="registry-server" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.638922 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="83971acc-e6fb-4ff2-b45b-7f0dda461036" containerName="registry-server" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.639328 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.640020 4772 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.640474 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6" gracePeriod=15 Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.640520 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903" gracePeriod=15 Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.640532 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976" gracePeriod=15 Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.640551 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96" gracePeriod=15 Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.640553 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630" gracePeriod=15 Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642480 4772 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.642671 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642689 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.642724 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642752 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.642763 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642771 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.642780 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642787 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.642800 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642806 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.642815 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642820 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.642828 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642834 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642933 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642944 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642955 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642963 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642971 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.642977 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.676977 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621244af-16f4-4b04-aa0c-5c71a7d49eb5" path="/var/lib/kubelet/pods/621244af-16f4-4b04-aa0c-5c71a7d49eb5/volumes" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.677980 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731294 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731352 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731386 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731425 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731485 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731522 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.731552 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832590 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832675 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832730 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832788 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832810 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832844 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832848 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 
03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832871 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832877 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832861 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832946 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.832998 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.833025 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.833001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.833490 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.833599 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: I0124 03:45:37.975603 4772 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:45:37 crc kubenswrapper[4772]: W0124 03:45:37.995471 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b70f461465ea6df7646082e7f7a9f2d702a9c110157a5d00eec24c2f2825e929 WatchSource:0}: Error finding container b70f461465ea6df7646082e7f7a9f2d702a9c110157a5d00eec24c2f2825e929: Status 404 returned error can't find the container with id b70f461465ea6df7646082e7f7a9f2d702a9c110157a5d00eec24c2f2825e929 Jan 24 03:45:37 crc kubenswrapper[4772]: E0124 03:45:37.999689 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d8dfb0bb334e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 03:45:37.999074532 +0000 UTC m=+235.036165257,LastTimestamp:2026-01-24 03:45:37.999074532 +0000 UTC m=+235.036165257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.427824 4772 generic.go:334] "Generic (PLEG): container finished" podID="395b8998-d655-4164-b6ae-ba0fc8bd4434" containerID="84325b4a25e5cbb18ad7d62436052525ae290e8a64b8ef8f75a5cd7daa26a374" exitCode=0 Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.427908 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"395b8998-d655-4164-b6ae-ba0fc8bd4434","Type":"ContainerDied","Data":"84325b4a25e5cbb18ad7d62436052525ae290e8a64b8ef8f75a5cd7daa26a374"} Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.428605 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.429140 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.429805 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22"} Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.429855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b70f461465ea6df7646082e7f7a9f2d702a9c110157a5d00eec24c2f2825e929"} Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.430367 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.430929 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.432236 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.434234 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.435374 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96" exitCode=0 Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.435429 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976" exitCode=0 Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.435452 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903" exitCode=0 Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.435476 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630" exitCode=2 Jan 24 03:45:38 crc kubenswrapper[4772]: I0124 03:45:38.435504 4772 scope.go:117] "RemoveContainer" containerID="5080c2c2105044159d33ad2184da4e0dcd63df42416b28e633580f72facd8cad" Jan 24 03:45:39 crc kubenswrapper[4772]: I0124 03:45:39.449062 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 03:45:39 crc kubenswrapper[4772]: I0124 03:45:39.885655 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 03:45:39 crc kubenswrapper[4772]: I0124 03:45:39.886504 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:39 crc kubenswrapper[4772]: I0124 03:45:39.887127 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.065609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "395b8998-d655-4164-b6ae-ba0fc8bd4434" (UID: "395b8998-d655-4164-b6ae-ba0fc8bd4434"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.065215 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-kubelet-dir\") pod \"395b8998-d655-4164-b6ae-ba0fc8bd4434\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.066073 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-var-lock\") pod \"395b8998-d655-4164-b6ae-ba0fc8bd4434\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.066137 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-var-lock" (OuterVolumeSpecName: "var-lock") pod "395b8998-d655-4164-b6ae-ba0fc8bd4434" (UID: "395b8998-d655-4164-b6ae-ba0fc8bd4434"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.066198 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b8998-d655-4164-b6ae-ba0fc8bd4434-kube-api-access\") pod \"395b8998-d655-4164-b6ae-ba0fc8bd4434\" (UID: \"395b8998-d655-4164-b6ae-ba0fc8bd4434\") " Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.067496 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.067520 4772 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/395b8998-d655-4164-b6ae-ba0fc8bd4434-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.073690 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395b8998-d655-4164-b6ae-ba0fc8bd4434-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "395b8998-d655-4164-b6ae-ba0fc8bd4434" (UID: "395b8998-d655-4164-b6ae-ba0fc8bd4434"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.168377 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b8998-d655-4164-b6ae-ba0fc8bd4434-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.459806 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.459807 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"395b8998-d655-4164-b6ae-ba0fc8bd4434","Type":"ContainerDied","Data":"2dd9c12e3196d14d484d0fe3641ca8b749cbc964456de512b6271944a2f005d6"} Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.459953 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd9c12e3196d14d484d0fe3641ca8b749cbc964456de512b6271944a2f005d6" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.464980 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.466623 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6" exitCode=0 Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.492365 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.492853 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.501112 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.502070 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.502572 4772 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.502748 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.502956 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.674748 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.674798 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.674819 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.674833 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.674905 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.674997 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.675108 4772 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.675118 4772 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:40 crc kubenswrapper[4772]: I0124 03:45:40.675127 4772 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.472924 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.473490 4772 scope.go:117] "RemoveContainer" containerID="b41f66776e15a4516a2f11b8471328f57d13f770619a717e9e5a9867c6d3ac96" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.473553 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.496063 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.496577 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.497093 4772 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.497382 4772 scope.go:117] "RemoveContainer" containerID="4e69d3e4c53a4df968b7daf0b73bd65f1df6e886fcb52a8afa09ce3a6aeb6976" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.514545 4772 scope.go:117] "RemoveContainer" containerID="153a60bb0e8436ea389f98124b8627835ada375720d4139a28e7a8fa87685903" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.529481 4772 scope.go:117] "RemoveContainer" containerID="a8a991618d233900d0e61f593491f54718075726debeeca60e6e3c93691b0630" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.539954 4772 scope.go:117] "RemoveContainer" containerID="83749f3ef58e427fa9b51338df26aef38e08c0d808062c156ba3aecf0573e5f6" Jan 24 03:45:41 crc kubenswrapper[4772]: E0124 03:45:41.541702 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d8dfb0bb334e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 03:45:37.999074532 +0000 UTC m=+235.036165257,LastTimestamp:2026-01-24 03:45:37.999074532 +0000 UTC m=+235.036165257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.556481 4772 scope.go:117] "RemoveContainer" containerID="105870bb01b052b347ff30a64229df70e2e2419a6aad9ab438ff6cea38e8bfc8" Jan 24 03:45:41 crc kubenswrapper[4772]: I0124 03:45:41.665591 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" 
path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 24 03:45:42 crc kubenswrapper[4772]: E0124 03:45:42.707667 4772 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" volumeName="registry-storage" Jan 24 03:45:42 crc kubenswrapper[4772]: I0124 03:45:42.984896 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:42 crc kubenswrapper[4772]: I0124 03:45:42.989669 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" Jan 24 03:45:42 crc kubenswrapper[4772]: I0124 03:45:42.990383 4772 status_manager.go:851] "Failed to get status for pod" podUID="83a32f50-61b4-4fbc-803c-6228ac53f794" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7c7cdcb8c6-d4tkq\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:42 crc kubenswrapper[4772]: I0124 03:45:42.990794 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:42 crc kubenswrapper[4772]: I0124 03:45:42.991415 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:43 crc kubenswrapper[4772]: I0124 03:45:43.661938 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:43 crc kubenswrapper[4772]: I0124 03:45:43.662233 4772 status_manager.go:851] "Failed to get status for pod" podUID="83a32f50-61b4-4fbc-803c-6228ac53f794" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7c7cdcb8c6-d4tkq\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:43 crc kubenswrapper[4772]: I0124 03:45:43.662554 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 
24 03:45:44 crc kubenswrapper[4772]: E0124 03:45:44.870707 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:44 crc kubenswrapper[4772]: E0124 03:45:44.871774 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:44 crc kubenswrapper[4772]: E0124 03:45:44.872348 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:44 crc kubenswrapper[4772]: E0124 03:45:44.873124 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:44 crc kubenswrapper[4772]: E0124 03:45:44.873558 4772 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:44 crc kubenswrapper[4772]: I0124 03:45:44.873618 4772 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 24 03:45:44 crc kubenswrapper[4772]: E0124 03:45:44.873926 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms" Jan 24 03:45:45 crc kubenswrapper[4772]: E0124 03:45:45.075211 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Jan 24 03:45:45 crc kubenswrapper[4772]: E0124 03:45:45.475993 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms" Jan 24 03:45:46 crc kubenswrapper[4772]: E0124 03:45:46.276879 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s" Jan 24 03:45:47 crc kubenswrapper[4772]: E0124 03:45:47.878279 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="3.2s" Jan 24 03:45:48 crc kubenswrapper[4772]: I0124 03:45:48.658020 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:48 crc kubenswrapper[4772]: I0124 03:45:48.658732 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:48 crc kubenswrapper[4772]: I0124 03:45:48.659296 4772 status_manager.go:851] "Failed to get status for pod" podUID="83a32f50-61b4-4fbc-803c-6228ac53f794" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7c7cdcb8c6-d4tkq\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:48 crc kubenswrapper[4772]: I0124 03:45:48.659868 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:48 crc kubenswrapper[4772]: I0124 03:45:48.680617 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:48 crc kubenswrapper[4772]: I0124 03:45:48.680660 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:48 crc kubenswrapper[4772]: E0124 03:45:48.681091 4772 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:48 crc kubenswrapper[4772]: I0124 03:45:48.681390 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:48 crc kubenswrapper[4772]: W0124 03:45:48.710108 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5071d7447cd6a086fcdc95e3f59e167afa20a4f7cee1c50f81204ebf2440ebf5 WatchSource:0}: Error finding container 5071d7447cd6a086fcdc95e3f59e167afa20a4f7cee1c50f81204ebf2440ebf5: Status 404 returned error can't find the container with id 5071d7447cd6a086fcdc95e3f59e167afa20a4f7cee1c50f81204ebf2440ebf5 Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.529912 4772 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2ba42eb2541b67834e395852ef0c07784b56a87756230cfe14f671afd7a43284" exitCode=0 Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.529977 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2ba42eb2541b67834e395852ef0c07784b56a87756230cfe14f671afd7a43284"} Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.530017 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5071d7447cd6a086fcdc95e3f59e167afa20a4f7cee1c50f81204ebf2440ebf5"} Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.530399 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.530421 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:49 crc kubenswrapper[4772]: E0124 03:45:49.530910 4772 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.531113 4772 status_manager.go:851] "Failed to get status for pod" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.531529 4772 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:49 crc kubenswrapper[4772]: I0124 03:45:49.532055 4772 status_manager.go:851] "Failed to get status for pod" podUID="83a32f50-61b4-4fbc-803c-6228ac53f794" pod="openshift-route-controller-manager/route-controller-manager-7c7cdcb8c6-d4tkq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7c7cdcb8c6-d4tkq\": dial tcp 38.102.83.12:6443: connect: connection refused" Jan 24 03:45:50 crc kubenswrapper[4772]: I0124 
03:45:50.539924 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4baf20e04d97b58aa8683c416703c031c312d0dd2eaae97eee17d63e9e774902"} Jan 24 03:45:50 crc kubenswrapper[4772]: I0124 03:45:50.540289 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5306968aff1329519bd281c0f4d7f8f04faf89fdfa27972c11cd3478ffb9e2d3"} Jan 24 03:45:50 crc kubenswrapper[4772]: I0124 03:45:50.540304 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"47311766110ce576ac6bd8567acfbb1f9cb868d536f68e5b5b024af75b7dd1e8"} Jan 24 03:45:51 crc kubenswrapper[4772]: I0124 03:45:51.548022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"49d13c46709d0a71e7fa379ad0510d61c7cb5278831f8bcc6277f555d29a82a9"} Jan 24 03:45:51 crc kubenswrapper[4772]: I0124 03:45:51.548064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8458e7ef8dfbdec655b9f7af48b9dac1395938be4a00dd7f22034a8e8335b002"} Jan 24 03:45:51 crc kubenswrapper[4772]: I0124 03:45:51.548261 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:51 crc kubenswrapper[4772]: I0124 03:45:51.548276 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:51 crc kubenswrapper[4772]: I0124 03:45:51.548403 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:52 crc kubenswrapper[4772]: I0124 03:45:52.555823 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 03:45:52 crc kubenswrapper[4772]: I0124 03:45:52.556043 4772 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5" exitCode=1 Jan 24 03:45:52 crc kubenswrapper[4772]: I0124 03:45:52.556093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5"} Jan 24 03:45:52 crc kubenswrapper[4772]: I0124 03:45:52.556865 4772 scope.go:117] "RemoveContainer" containerID="9d932cf6268002016e61d0ad39bae72a93971144a205e4e0e2b4ff56103dd1b5" Jan 24 03:45:53 crc kubenswrapper[4772]: I0124 03:45:53.565501 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 03:45:53 crc kubenswrapper[4772]: I0124 03:45:53.565874 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bde3313a7432532f6ad6ac69b0340f30afac658652e146ab7c8c1e25af328a06"} Jan 24 03:45:53 crc kubenswrapper[4772]: I0124 03:45:53.682374 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:53 crc kubenswrapper[4772]: I0124 03:45:53.682436 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:53 crc kubenswrapper[4772]: I0124 03:45:53.688196 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:53 crc kubenswrapper[4772]: I0124 03:45:53.691993 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:45:55 crc kubenswrapper[4772]: I0124 03:45:55.197490 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:45:55 crc kubenswrapper[4772]: I0124 03:45:55.205813 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:45:56 crc kubenswrapper[4772]: I0124 03:45:56.559198 4772 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:57 crc kubenswrapper[4772]: I0124 03:45:57.593809 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:57 crc kubenswrapper[4772]: I0124 03:45:57.595477 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:57 crc kubenswrapper[4772]: I0124 03:45:57.604360 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:45:57 crc kubenswrapper[4772]: I0124 03:45:57.608892 4772 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2b10aa1e-83f8-4515-97b4-dc80752602fb" Jan 24 03:45:58 crc kubenswrapper[4772]: I0124 03:45:58.599860 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:58 crc kubenswrapper[4772]: I0124 03:45:58.599908 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:45:58 crc kubenswrapper[4772]: I0124 03:45:58.603679 4772 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2b10aa1e-83f8-4515-97b4-dc80752602fb" Jan 24 03:45:59 crc kubenswrapper[4772]: I0124 03:45:59.760991 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" podUID="2ed4e912-e375-41c4-a319-a360e33e8fde" containerName="oauth-openshift" containerID="cri-o://68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb" gracePeriod=15 Jan 24 
03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.300964 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448435 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-error\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448512 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-service-ca\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448541 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-ocp-branding-template\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448578 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-cliconfig\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448657 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-trusted-ca-bundle\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448691 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-session\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448721 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rddvs\" (UniqueName: \"kubernetes.io/projected/2ed4e912-e375-41c4-a319-a360e33e8fde-kube-api-access-rddvs\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448766 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-dir\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448808 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-policies\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: 
\"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448832 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-idp-0-file-data\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448857 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-provider-selection\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448886 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-login\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448910 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-router-certs\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.448936 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-serving-cert\") pod \"2ed4e912-e375-41c4-a319-a360e33e8fde\" (UID: \"2ed4e912-e375-41c4-a319-a360e33e8fde\") " Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.450041 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.450140 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.450149 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.450487 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.450203 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.455875 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.456556 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.456828 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed4e912-e375-41c4-a319-a360e33e8fde-kube-api-access-rddvs" (OuterVolumeSpecName: "kube-api-access-rddvs") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "kube-api-access-rddvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.457369 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.457505 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.459149 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.463921 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.464225 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.464339 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2ed4e912-e375-41c4-a319-a360e33e8fde" (UID: "2ed4e912-e375-41c4-a319-a360e33e8fde"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550563 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550619 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550642 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550660 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550679 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550695 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550712 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550731 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550769 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550789 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550806 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550823 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/2ed4e912-e375-41c4-a319-a360e33e8fde-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550840 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rddvs\" (UniqueName: \"kubernetes.io/projected/2ed4e912-e375-41c4-a319-a360e33e8fde-kube-api-access-rddvs\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.550856 4772 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ed4e912-e375-41c4-a319-a360e33e8fde-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.614514 4772 generic.go:334] "Generic (PLEG): container finished" podID="2ed4e912-e375-41c4-a319-a360e33e8fde" containerID="68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb" exitCode=0 Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.614557 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" event={"ID":"2ed4e912-e375-41c4-a319-a360e33e8fde","Type":"ContainerDied","Data":"68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb"} Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.614591 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" event={"ID":"2ed4e912-e375-41c4-a319-a360e33e8fde","Type":"ContainerDied","Data":"54ce1f99af4983fc2717ab5edc38ebf91754a43a99a572246448cc72075f2bbd"} Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.614608 4772 scope.go:117] "RemoveContainer" containerID="68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.614629 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mk8n7" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.651384 4772 scope.go:117] "RemoveContainer" containerID="68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb" Jan 24 03:46:00 crc kubenswrapper[4772]: E0124 03:46:00.652102 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb\": container with ID starting with 68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb not found: ID does not exist" containerID="68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb" Jan 24 03:46:00 crc kubenswrapper[4772]: I0124 03:46:00.652151 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb"} err="failed to get container status \"68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb\": rpc error: code = NotFound desc = could not find container \"68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb\": container with ID starting with 68820b1f577c6958292d51b8ae9c5a8e08181ee1c8bd57bbce33a01a1142acbb not found: ID does not exist" Jan 24 03:46:02 crc kubenswrapper[4772]: I0124 03:46:02.823090 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 03:46:03 crc kubenswrapper[4772]: I0124 03:46:03.017153 4772 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 24 03:46:03 crc kubenswrapper[4772]: I0124 03:46:03.699035 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 03:46:04 crc kubenswrapper[4772]: I0124 03:46:04.676306 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 03:46:04 crc kubenswrapper[4772]: I0124 03:46:04.971582 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 24 03:46:05 crc kubenswrapper[4772]: I0124 03:46:05.374518 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 03:46:05 crc kubenswrapper[4772]: I0124 03:46:05.868699 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 24 03:46:05 crc kubenswrapper[4772]: I0124 03:46:05.911917 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 24 03:46:07 crc kubenswrapper[4772]: I0124 03:46:07.363108 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 03:46:07 crc kubenswrapper[4772]: I0124 03:46:07.963323 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 03:46:08 crc kubenswrapper[4772]: I0124 03:46:08.062568 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 03:46:08 crc kubenswrapper[4772]: I0124 03:46:08.068355 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 24 03:46:08 crc kubenswrapper[4772]: I0124 03:46:08.363648 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 03:46:09 crc kubenswrapper[4772]: I0124 03:46:09.390733 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 24 03:46:09 crc kubenswrapper[4772]: I0124 03:46:09.511197 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 03:46:09 crc kubenswrapper[4772]: I0124 03:46:09.888875 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.051324 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.348734 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.407635 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.670519 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.721413 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.791848 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.862260 4772 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.917470 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.949506 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 03:46:10 crc kubenswrapper[4772]: I0124 03:46:10.985439 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.157267 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.255116 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.327724 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.338125 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.496840 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.702998 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.736193 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.811110 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 03:46:11 crc kubenswrapper[4772]: I0124 03:46:11.903471 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.005264 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.152251 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.231882 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.342289 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.397531 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.421158 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.460808 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.477821 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.617540 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.688840 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.701219 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.701725 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.749362 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 03:46:12 crc kubenswrapper[4772]: I0124 03:46:12.869217 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.027900 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.082024 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.350558 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.667049 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.713872 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.714128 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.728828 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.755806 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.863374 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.912247 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 24 03:46:13 crc kubenswrapper[4772]: I0124 03:46:13.954499 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.001917 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.040086 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.054542 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.068945 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.135614 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.324080 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.405573 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.536343 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.545840 4772 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.588659 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.696313 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.765528 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.808645 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.820398 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.885326 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.936279 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 03:46:14 crc kubenswrapper[4772]: I0124 03:46:14.967312 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.098150 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.157100 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.249872 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.422816 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.716201 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.721358 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.726787 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.738884 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.771322 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.798788 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 24 
03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.889881 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 24 03:46:15 crc kubenswrapper[4772]: I0124 03:46:15.929293 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.018529 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.018878 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.057194 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.171911 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.226964 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.266938 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.430935 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.569803 4772 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.634191 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.717189 4772 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.750805 4772 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.752325 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.752309535 podStartE2EDuration="39.752309535s" podCreationTimestamp="2026-01-24 03:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:45:56.57523128 +0000 UTC m=+253.612322005" watchObservedRunningTime="2026-01-24 03:46:16.752309535 +0000 UTC m=+273.789400260"
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.755730 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mk8n7","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.755818 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5c79fb4878-hbkbp","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 24 03:46:16 crc kubenswrapper[4772]: E0124 03:46:16.756032 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed4e912-e375-41c4-a319-a360e33e8fde" containerName="oauth-openshift"
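This is the pivot of the section: a SyncLoop REMOVE for the old oauth-openshift-558db77b4-mk8n7 pod arrives together with a SyncLoop ADD for its replacement oauth-openshift-5c79fb4878-hbkbp, and the CPU manager starts discarding stale per-container state for the old UID. The pod_startup_latency_tracker entry above also records podStartSLOduration=39.752309535 for the startup monitor; that figure appears to be simply watchObservedRunningTime minus podCreationTimestamp, which a quick check against the timestamps printed in the log bears out:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps exactly as printed by pod_startup_latency_tracker.go above;
	// the layout is Go's default time.Time formatting.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-24 03:45:37 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-24 03:46:16.752309535 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 39.752309535s, matching both podStartSLOduration and
	// podStartE2EDuration in the log entry.
	fmt.Println(observed.Sub(created))
}
```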
containerName="oauth-openshift" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.756051 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed4e912-e375-41c4-a319-a360e33e8fde" containerName="oauth-openshift" Jan 24 03:46:16 crc kubenswrapper[4772]: E0124 03:46:16.756074 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" containerName="installer" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.756085 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" containerName="installer" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.756197 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="395b8998-d655-4164-b6ae-ba0fc8bd4434" containerName="installer" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.756216 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed4e912-e375-41c4-a319-a360e33e8fde" containerName="oauth-openshift" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.756698 4772 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.756874 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d65426d-9ece-4080-84e0-398c24a76c30" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.756790 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.761983 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-error\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762035 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-login\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762115 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f76327d-74d4-43e9-a6a5-1cc0691d0489-audit-dir\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762161 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82b6\" (UniqueName: \"kubernetes.io/projected/3f76327d-74d4-43e9-a6a5-1cc0691d0489-kube-api-access-x82b6\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762189 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762267 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762297 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762358 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-session\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762404 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762464 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762507 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762531 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-audit-policies\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762597 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762622 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.762645 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.766783 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.767112 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.767108 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.768462 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.769132 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.769274 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.769419 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.769453 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.769467 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.769574 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.769594 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.770358 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.780576 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.783368 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.790007 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.817473 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.817458195 podStartE2EDuration="20.817458195s" podCreationTimestamp="2026-01-24 03:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:46:16.814858952 +0000 UTC m=+273.851949687" watchObservedRunningTime="2026-01-24 03:46:16.817458195 +0000 UTC m=+273.854548920" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.820342 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.830139 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.836156 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863317 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863370 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-audit-policies\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863392 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863412 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863431 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-error\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863465 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-login\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863491 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f76327d-74d4-43e9-a6a5-1cc0691d0489-audit-dir\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863515 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82b6\" (UniqueName: \"kubernetes.io/projected/3f76327d-74d4-43e9-a6a5-1cc0691d0489-kube-api-access-x82b6\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863542 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863586 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-session\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.863620 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.864187 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.864218 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f76327d-74d4-43e9-a6a5-1cc0691d0489-audit-dir\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.864200 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-audit-policies\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.865296 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.865829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.872220 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.875484 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.876268 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.882418 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.883534 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.883564 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-login\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.884812 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-system-session\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.885436 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-error\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.889274 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f76327d-74d4-43e9-a6a5-1cc0691d0489-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.893205 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82b6\" (UniqueName: \"kubernetes.io/projected/3f76327d-74d4-43e9-a6a5-1cc0691d0489-kube-api-access-x82b6\") pod \"oauth-openshift-5c79fb4878-hbkbp\" (UID: \"3f76327d-74d4-43e9-a6a5-1cc0691d0489\") " pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.897909 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 03:46:16 crc kubenswrapper[4772]: I0124 03:46:16.903196 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.067681 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.077082 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.158712 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.250052 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.263473 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.337849 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.350178 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.408864 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.468906 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.643156 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.669517 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed4e912-e375-41c4-a319-a360e33e8fde" path="/var/lib/kubelet/pods/2ed4e912-e375-41c4-a319-a360e33e8fde/volumes" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.744234 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.745645 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.787829 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.795377 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.821698 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.855878 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.887145 4772 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.902352 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.945673 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 03:46:17 crc kubenswrapper[4772]: I0124 03:46:17.961837 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.171671 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.178512 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.205055 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.205152 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.221103 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.221956 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.337549 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.361485 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.377846 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.378814 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.473605 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.478048 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.535261 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.717290 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.776877 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.818479 4772 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-operator"/"metrics-tls" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.864296 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.910080 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 03:46:18 crc kubenswrapper[4772]: I0124 03:46:18.917247 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.002201 4772 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.002448 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22" gracePeriod=5 Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.064160 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.146806 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.171701 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.191318 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.279395 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.338078 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.362065 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.393174 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.420043 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.644513 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.708694 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.728469 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.728531 4772 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.749829 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.753385 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.855481 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.869457 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.869648 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.875624 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.904123 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.936004 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 03:46:19 crc kubenswrapper[4772]: I0124 03:46:19.989688 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5c79fb4878-hbkbp"] Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.051224 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.056363 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.086565 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.191101 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.267552 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.269988 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.302627 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.354091 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.420395 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.431932 4772 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.433596 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.521285 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.561526 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.606890 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.654157 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.654652 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.671570 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.681040 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.754933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" event={"ID":"3f76327d-74d4-43e9-a6a5-1cc0691d0489","Type":"ContainerStarted","Data":"3183403eb7e34bae5cc6abf975377ccc2c4b52a2c9f629cc1d83b492224b0b6a"} Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.755039 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" event={"ID":"3f76327d-74d4-43e9-a6a5-1cc0691d0489","Type":"ContainerStarted","Data":"d65d9d66d6ca74ed556f35b022b7191fa197da5ab54a30db33643608832a8ecd"} Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.756103 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.792918 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" podStartSLOduration=46.792889745 podStartE2EDuration="46.792889745s" podCreationTimestamp="2026-01-24 03:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:46:20.784653894 +0000 UTC m=+277.821744619" watchObservedRunningTime="2026-01-24 03:46:20.792889745 +0000 UTC m=+277.829980510" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.861828 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5c79fb4878-hbkbp" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.913929 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.914037 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 24 03:46:20 crc kubenswrapper[4772]: I0124 03:46:20.937938 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.003460 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.006013 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.032272 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.107543 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.157976 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.168096 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.214862 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.216070 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.380358 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.414527 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.445797 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.526228 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.758501 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.819442 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 03:46:21 crc kubenswrapper[4772]: I0124 03:46:21.922581 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.006670 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.126131 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.127382 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.240039 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.259515 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.350586 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.388805 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.511089 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.576384 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.735825 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.740797 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.923429 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.962577 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.999329 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 03:46:22 crc kubenswrapper[4772]: I0124 03:46:22.999554 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.072878 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.152991 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.278844 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.486843 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.519096 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.623872 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 24 03:46:23 crc 
kubenswrapper[4772]: I0124 03:46:23.795887 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.805497 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 03:46:23 crc kubenswrapper[4772]: I0124 03:46:23.979660 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.223035 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.362997 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.584729 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.585081 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.644447 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.644640 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684157 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684216 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684287 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684303 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684360 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684379 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684418 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684492 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684541 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684709 4772 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684767 4772 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684784 4772 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.684797 4772 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.695669 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.701907 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.735447 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.742428 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.781823 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.782342 4772 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22" exitCode=137 Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.782408 4772 scope.go:117] "RemoveContainer" containerID="fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.782456 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.785941 4772 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.810971 4772 scope.go:117] "RemoveContainer" containerID="fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22" Jan 24 03:46:24 crc kubenswrapper[4772]: E0124 03:46:24.811451 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22\": container with ID starting with fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22 not found: ID does not exist" containerID="fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.811493 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22"} err="failed to get container status \"fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22\": rpc error: code = NotFound desc = could not find container \"fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22\": container with ID starting with fef382b9421db5b9f15b6bd293afca1bf6b36c4b50ddda59cd71f604bb8b5c22 not found: ID does not exist" Jan 24 03:46:24 crc kubenswrapper[4772]: I0124 03:46:24.983146 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.119936 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.296811 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.671272 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.671719 4772 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.687004 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.687064 4772 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5917ac28-6f25-436f-be60-3d394e1d733e" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.694099 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.694166 4772 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5917ac28-6f25-436f-be60-3d394e1d733e" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.823252 4772 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 24 03:46:25 crc kubenswrapper[4772]: I0124 03:46:25.891507 4772 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 24 03:46:26 crc kubenswrapper[4772]: I0124 03:46:26.124954 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 03:46:40 crc kubenswrapper[4772]: I0124 03:46:40.904380 4772 generic.go:334] "Generic (PLEG): container finished" podID="e18f918d-3751-4397-8029-4b1a3bc87953" containerID="93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c" exitCode=0 Jan 24 03:46:40 crc kubenswrapper[4772]: I0124 03:46:40.904466 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" event={"ID":"e18f918d-3751-4397-8029-4b1a3bc87953","Type":"ContainerDied","Data":"93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c"} Jan 24 03:46:40 crc kubenswrapper[4772]: I0124 03:46:40.905331 4772 scope.go:117] "RemoveContainer" containerID="93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c" Jan 24 03:46:41 crc kubenswrapper[4772]: I0124 03:46:41.917290 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" event={"ID":"e18f918d-3751-4397-8029-4b1a3bc87953","Type":"ContainerStarted","Data":"3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001"} Jan 24 03:46:41 crc kubenswrapper[4772]: I0124 03:46:41.918103 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:46:41 crc kubenswrapper[4772]: I0124 03:46:41.920714 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:46:43 crc kubenswrapper[4772]: I0124 03:46:43.470933 4772 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 24 03:46:44 crc kubenswrapper[4772]: I0124 03:46:44.639806 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8nr2"] Jan 24 03:46:44 crc kubenswrapper[4772]: I0124 03:46:44.641365 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8nr2" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="registry-server" containerID="cri-o://d20cab397c4b49bed38271ef533647c6a39cfff66dae69c6f944272d00abaca6" gracePeriod=2 Jan 24 03:46:44 crc kubenswrapper[4772]: I0124 03:46:44.942435 4772 generic.go:334] "Generic (PLEG): container finished" podID="d124ff24-991c-4a60-997d-b899e8387e0d" containerID="d20cab397c4b49bed38271ef533647c6a39cfff66dae69c6f944272d00abaca6" exitCode=0 Jan 24 03:46:44 crc kubenswrapper[4772]: I0124 03:46:44.942501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8nr2" event={"ID":"d124ff24-991c-4a60-997d-b899e8387e0d","Type":"ContainerDied","Data":"d20cab397c4b49bed38271ef533647c6a39cfff66dae69c6f944272d00abaca6"} Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.162932 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.322287 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-utilities\") pod \"d124ff24-991c-4a60-997d-b899e8387e0d\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.322361 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-catalog-content\") pod \"d124ff24-991c-4a60-997d-b899e8387e0d\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.322419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b72h\" (UniqueName: \"kubernetes.io/projected/d124ff24-991c-4a60-997d-b899e8387e0d-kube-api-access-5b72h\") pod \"d124ff24-991c-4a60-997d-b899e8387e0d\" (UID: \"d124ff24-991c-4a60-997d-b899e8387e0d\") " Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.323546 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-utilities" (OuterVolumeSpecName: "utilities") pod "d124ff24-991c-4a60-997d-b899e8387e0d" (UID: "d124ff24-991c-4a60-997d-b899e8387e0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.340399 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d124ff24-991c-4a60-997d-b899e8387e0d-kube-api-access-5b72h" (OuterVolumeSpecName: "kube-api-access-5b72h") pod "d124ff24-991c-4a60-997d-b899e8387e0d" (UID: "d124ff24-991c-4a60-997d-b899e8387e0d"). InnerVolumeSpecName "kube-api-access-5b72h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.424505 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b72h\" (UniqueName: \"kubernetes.io/projected/d124ff24-991c-4a60-997d-b899e8387e0d-kube-api-access-5b72h\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.424540 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.445610 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d124ff24-991c-4a60-997d-b899e8387e0d" (UID: "d124ff24-991c-4a60-997d-b899e8387e0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.526138 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d124ff24-991c-4a60-997d-b899e8387e0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.949952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8nr2" event={"ID":"d124ff24-991c-4a60-997d-b899e8387e0d","Type":"ContainerDied","Data":"b3d08b8bbaff4daad6e55792a58e9de90acbeb733c672d5ba6b9ff450c51ad8e"} Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.950014 4772 scope.go:117] "RemoveContainer" containerID="d20cab397c4b49bed38271ef533647c6a39cfff66dae69c6f944272d00abaca6" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.950137 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8nr2" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.965396 4772 scope.go:117] "RemoveContainer" containerID="0b318248366a83b067bd93e230709317a6d25e468321c02707f07616b977cdda" Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.973604 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8nr2"] Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.976184 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t8nr2"] Jan 24 03:46:45 crc kubenswrapper[4772]: I0124 03:46:45.991881 4772 scope.go:117] "RemoveContainer" containerID="2ccca00b96e00bfab784be64d8ada3e730b790ae23d882d5789359a36e15cf93" Jan 24 03:46:47 crc kubenswrapper[4772]: I0124 03:46:47.664815 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" path="/var/lib/kubelet/pods/d124ff24-991c-4a60-997d-b899e8387e0d/volumes" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.167948 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dqfs6"] Jan 24 03:47:00 crc kubenswrapper[4772]: E0124 03:47:00.168925 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="registry-server" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.168942 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="registry-server" Jan 24 03:47:00 crc kubenswrapper[4772]: E0124 03:47:00.168963 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="extract-content" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.168971 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="extract-content" Jan 24 03:47:00 crc kubenswrapper[4772]: E0124 03:47:00.168986 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.168994 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 03:47:00 crc kubenswrapper[4772]: E0124 03:47:00.169005 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="extract-utilities" Jan 24 03:47:00 
crc kubenswrapper[4772]: I0124 03:47:00.169012 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="extract-utilities" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.169125 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d124ff24-991c-4a60-997d-b899e8387e0d" containerName="registry-server" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.169144 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.169605 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.183902 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dqfs6"] Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.326139 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c84e7021-f209-4d4c-9093-9f1bdf86662f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.326471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c84e7021-f209-4d4c-9093-9f1bdf86662f-registry-certificates\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.326619 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-bound-sa-token\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.326756 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c84e7021-f209-4d4c-9093-9f1bdf86662f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.326889 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-registry-tls\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.327014 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.327147 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgxqx\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-kube-api-access-pgxqx\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.327259 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84e7021-f209-4d4c-9093-9f1bdf86662f-trusted-ca\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.351759 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.428185 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-bound-sa-token\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.428245 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c84e7021-f209-4d4c-9093-9f1bdf86662f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.428271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-registry-tls\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.428295 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgxqx\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-kube-api-access-pgxqx\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.428317 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84e7021-f209-4d4c-9093-9f1bdf86662f-trusted-ca\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.428366 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c84e7021-f209-4d4c-9093-9f1bdf86662f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.428390 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c84e7021-f209-4d4c-9093-9f1bdf86662f-registry-certificates\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.429496 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c84e7021-f209-4d4c-9093-9f1bdf86662f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.430108 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c84e7021-f209-4d4c-9093-9f1bdf86662f-registry-certificates\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.430157 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84e7021-f209-4d4c-9093-9f1bdf86662f-trusted-ca\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.441514 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c84e7021-f209-4d4c-9093-9f1bdf86662f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.442258 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-registry-tls\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.444629 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgxqx\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-kube-api-access-pgxqx\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.448173 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84e7021-f209-4d4c-9093-9f1bdf86662f-bound-sa-token\") pod \"image-registry-66df7c8f76-dqfs6\" (UID: \"c84e7021-f209-4d4c-9093-9f1bdf86662f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.506411 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:00 crc kubenswrapper[4772]: I0124 03:47:00.966185 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dqfs6"] Jan 24 03:47:00 crc kubenswrapper[4772]: W0124 03:47:00.977510 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc84e7021_f209_4d4c_9093_9f1bdf86662f.slice/crio-0c3b619f8ad03421ebf3278557c9ea8b0889c270afbb778792138a9cb7e3f53a WatchSource:0}: Error finding container 0c3b619f8ad03421ebf3278557c9ea8b0889c270afbb778792138a9cb7e3f53a: Status 404 returned error can't find the container with id 0c3b619f8ad03421ebf3278557c9ea8b0889c270afbb778792138a9cb7e3f53a Jan 24 03:47:01 crc kubenswrapper[4772]: I0124 03:47:01.050567 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" event={"ID":"c84e7021-f209-4d4c-9093-9f1bdf86662f","Type":"ContainerStarted","Data":"0c3b619f8ad03421ebf3278557c9ea8b0889c270afbb778792138a9cb7e3f53a"} Jan 24 03:47:02 crc kubenswrapper[4772]: I0124 03:47:02.056918 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" event={"ID":"c84e7021-f209-4d4c-9093-9f1bdf86662f","Type":"ContainerStarted","Data":"1ef5aca2675e3dda04230e2289ec0e2eca3cf092e331ea6dd0eb0e48121647b0"} Jan 24 03:47:02 crc kubenswrapper[4772]: I0124 03:47:02.057088 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:02 crc kubenswrapper[4772]: I0124 03:47:02.093997 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" podStartSLOduration=2.093983102 podStartE2EDuration="2.093983102s" podCreationTimestamp="2026-01-24 03:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:47:02.090310199 +0000 UTC m=+319.127400924" watchObservedRunningTime="2026-01-24 03:47:02.093983102 +0000 UTC m=+319.131073817" Jan 24 03:47:20 crc kubenswrapper[4772]: I0124 03:47:20.518194 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dqfs6" Jan 24 03:47:20 crc kubenswrapper[4772]: I0124 03:47:20.609276 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2sfxs"] Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.497331 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgctw"] Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.501267 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jgctw" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="registry-server" containerID="cri-o://518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae" gracePeriod=30 Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.503411 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j67dm"] Jan 24 03:47:32 crc 
kubenswrapper[4772]: I0124 03:47:32.503725 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j67dm" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="registry-server" containerID="cri-o://2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b" gracePeriod=30 Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.514019 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8vwrd"] Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.514272 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" containerID="cri-o://3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001" gracePeriod=30 Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.528158 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v4b5k"] Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.529127 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.539858 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcjhk"] Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.540150 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lcjhk" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="registry-server" containerID="cri-o://a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b" gracePeriod=30 Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.543860 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkqlj"] Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.544214 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bkqlj" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="registry-server" containerID="cri-o://7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e" gracePeriod=30 Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.548666 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v4b5k"] Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.723916 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b86459ea-dd22-4a69-8561-379087f99c80-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.723985 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pqp\" (UniqueName: \"kubernetes.io/projected/b86459ea-dd22-4a69-8561-379087f99c80-kube-api-access-p4pqp\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.725293 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b86459ea-dd22-4a69-8561-379087f99c80-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.826876 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b86459ea-dd22-4a69-8561-379087f99c80-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.826933 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4pqp\" (UniqueName: \"kubernetes.io/projected/b86459ea-dd22-4a69-8561-379087f99c80-kube-api-access-p4pqp\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.826999 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b86459ea-dd22-4a69-8561-379087f99c80-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.832381 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b86459ea-dd22-4a69-8561-379087f99c80-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.833151 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b86459ea-dd22-4a69-8561-379087f99c80-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.848431 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4pqp\" (UniqueName: \"kubernetes.io/projected/b86459ea-dd22-4a69-8561-379087f99c80-kube-api-access-p4pqp\") pod \"marketplace-operator-79b997595-v4b5k\" (UID: \"b86459ea-dd22-4a69-8561-379087f99c80\") " pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.867711 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.894919 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgctw" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.927566 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-catalog-content\") pod \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.927688 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjs8h\" (UniqueName: \"kubernetes.io/projected/f65f1015-89a0-482e-87d3-f2b2e2149e2d-kube-api-access-sjs8h\") pod \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.927763 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-utilities\") pod \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\" (UID: \"f65f1015-89a0-482e-87d3-f2b2e2149e2d\") " Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.928801 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-utilities" (OuterVolumeSpecName: "utilities") pod "f65f1015-89a0-482e-87d3-f2b2e2149e2d" (UID: "f65f1015-89a0-482e-87d3-f2b2e2149e2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.944620 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65f1015-89a0-482e-87d3-f2b2e2149e2d-kube-api-access-sjs8h" (OuterVolumeSpecName: "kube-api-access-sjs8h") pod "f65f1015-89a0-482e-87d3-f2b2e2149e2d" (UID: "f65f1015-89a0-482e-87d3-f2b2e2149e2d"). InnerVolumeSpecName "kube-api-access-sjs8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.973730 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f65f1015-89a0-482e-87d3-f2b2e2149e2d" (UID: "f65f1015-89a0-482e-87d3-f2b2e2149e2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.981412 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.985706 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.988863 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkqlj" Jan 24 03:47:32 crc kubenswrapper[4772]: I0124 03:47:32.997869 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcjhk" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029270 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54z9x\" (UniqueName: \"kubernetes.io/projected/022f55bb-f179-48cc-ae69-a9936070e3b7-kube-api-access-54z9x\") pod \"022f55bb-f179-48cc-ae69-a9936070e3b7\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029314 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-catalog-content\") pod \"022f55bb-f179-48cc-ae69-a9936070e3b7\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029340 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-trusted-ca\") pod \"e18f918d-3751-4397-8029-4b1a3bc87953\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029365 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-utilities\") pod \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029387 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-utilities\") pod \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029409 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-utilities\") pod \"022f55bb-f179-48cc-ae69-a9936070e3b7\" (UID: \"022f55bb-f179-48cc-ae69-a9936070e3b7\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029429 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq778\" (UniqueName: \"kubernetes.io/projected/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-kube-api-access-zq778\") pod \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029471 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-operator-metrics\") pod \"e18f918d-3751-4397-8029-4b1a3bc87953\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029489 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-catalog-content\") pod \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\" (UID: \"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029525 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-catalog-content\") pod \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029552 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hwwf\" (UniqueName: \"kubernetes.io/projected/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-kube-api-access-9hwwf\") pod \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\" (UID: \"6cd7a1a3-2773-4ffc-9cef-8015556b3b33\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029590 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgm9\" (UniqueName: \"kubernetes.io/projected/e18f918d-3751-4397-8029-4b1a3bc87953-kube-api-access-dzgm9\") pod \"e18f918d-3751-4397-8029-4b1a3bc87953\" (UID: \"e18f918d-3751-4397-8029-4b1a3bc87953\") " Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029822 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029834 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65f1015-89a0-482e-87d3-f2b2e2149e2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.029844 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjs8h\" (UniqueName: \"kubernetes.io/projected/f65f1015-89a0-482e-87d3-f2b2e2149e2d-kube-api-access-sjs8h\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.031289 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-utilities" (OuterVolumeSpecName: "utilities") pod "022f55bb-f179-48cc-ae69-a9936070e3b7" (UID: "022f55bb-f179-48cc-ae69-a9936070e3b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.032403 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e18f918d-3751-4397-8029-4b1a3bc87953" (UID: "e18f918d-3751-4397-8029-4b1a3bc87953"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.035401 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18f918d-3751-4397-8029-4b1a3bc87953-kube-api-access-dzgm9" (OuterVolumeSpecName: "kube-api-access-dzgm9") pod "e18f918d-3751-4397-8029-4b1a3bc87953" (UID: "e18f918d-3751-4397-8029-4b1a3bc87953"). InnerVolumeSpecName "kube-api-access-dzgm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.036287 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-utilities" (OuterVolumeSpecName: "utilities") pod "6cd7a1a3-2773-4ffc-9cef-8015556b3b33" (UID: "6cd7a1a3-2773-4ffc-9cef-8015556b3b33"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.036819 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022f55bb-f179-48cc-ae69-a9936070e3b7-kube-api-access-54z9x" (OuterVolumeSpecName: "kube-api-access-54z9x") pod "022f55bb-f179-48cc-ae69-a9936070e3b7" (UID: "022f55bb-f179-48cc-ae69-a9936070e3b7"). InnerVolumeSpecName "kube-api-access-54z9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.037439 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-utilities" (OuterVolumeSpecName: "utilities") pod "1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" (UID: "1276fcbc-1783-4f3a-8ee0-8be45b19d4b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.041661 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e18f918d-3751-4397-8029-4b1a3bc87953" (UID: "e18f918d-3751-4397-8029-4b1a3bc87953"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.041755 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-kube-api-access-zq778" (OuterVolumeSpecName: "kube-api-access-zq778") pod "1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" (UID: "1276fcbc-1783-4f3a-8ee0-8be45b19d4b1"). InnerVolumeSpecName "kube-api-access-zq778". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.044791 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-kube-api-access-9hwwf" (OuterVolumeSpecName: "kube-api-access-9hwwf") pod "6cd7a1a3-2773-4ffc-9cef-8015556b3b33" (UID: "6cd7a1a3-2773-4ffc-9cef-8015556b3b33"). InnerVolumeSpecName "kube-api-access-9hwwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.072974 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" (UID: "1276fcbc-1783-4f3a-8ee0-8be45b19d4b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.110631 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cd7a1a3-2773-4ffc-9cef-8015556b3b33" (UID: "6cd7a1a3-2773-4ffc-9cef-8015556b3b33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130649 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgm9\" (UniqueName: \"kubernetes.io/projected/e18f918d-3751-4397-8029-4b1a3bc87953-kube-api-access-dzgm9\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130680 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54z9x\" (UniqueName: \"kubernetes.io/projected/022f55bb-f179-48cc-ae69-a9936070e3b7-kube-api-access-54z9x\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130690 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130702 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130711 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130719 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130728 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq778\" (UniqueName: \"kubernetes.io/projected/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-kube-api-access-zq778\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130753 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e18f918d-3751-4397-8029-4b1a3bc87953-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130763 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130771 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.130780 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hwwf\" (UniqueName: \"kubernetes.io/projected/6cd7a1a3-2773-4ffc-9cef-8015556b3b33-kube-api-access-9hwwf\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.165787 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "022f55bb-f179-48cc-ae69-a9936070e3b7" (UID: "022f55bb-f179-48cc-ae69-a9936070e3b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.232134 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022f55bb-f179-48cc-ae69-a9936070e3b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.274085 4772 generic.go:334] "Generic (PLEG): container finished" podID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerID="a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b" exitCode=0 Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.274116 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcjhk" event={"ID":"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1","Type":"ContainerDied","Data":"a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.274155 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcjhk" event={"ID":"1276fcbc-1783-4f3a-8ee0-8be45b19d4b1","Type":"ContainerDied","Data":"ca2029868590da0fa00e6e773eaac728b8f35ace003ba91a399ab429aea39450"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.274171 4772 scope.go:117] "RemoveContainer" containerID="a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.274202 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcjhk" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.278782 4772 generic.go:334] "Generic (PLEG): container finished" podID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerID="518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae" exitCode=0 Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.278859 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgctw" event={"ID":"f65f1015-89a0-482e-87d3-f2b2e2149e2d","Type":"ContainerDied","Data":"518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.278888 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgctw" event={"ID":"f65f1015-89a0-482e-87d3-f2b2e2149e2d","Type":"ContainerDied","Data":"0faac75f23d5e2504113924538452f73b0f9283f89ae4747c782458eaf027a00"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.278887 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgctw" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.280471 4772 generic.go:334] "Generic (PLEG): container finished" podID="e18f918d-3751-4397-8029-4b1a3bc87953" containerID="3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001" exitCode=0 Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.280569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" event={"ID":"e18f918d-3751-4397-8029-4b1a3bc87953","Type":"ContainerDied","Data":"3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.280598 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" event={"ID":"e18f918d-3751-4397-8029-4b1a3bc87953","Type":"ContainerDied","Data":"008d7d50967c24e0dc8dc1d625108fcba355ceed6bb18decd3dc600258f49af6"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.280649 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8vwrd" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.286849 4772 generic.go:334] "Generic (PLEG): container finished" podID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerID="7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e" exitCode=0 Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.286931 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkqlj" event={"ID":"022f55bb-f179-48cc-ae69-a9936070e3b7","Type":"ContainerDied","Data":"7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.286957 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkqlj" event={"ID":"022f55bb-f179-48cc-ae69-a9936070e3b7","Type":"ContainerDied","Data":"c811f2fd6c738b78c40d63171e08bf09a181ffd4b96e773519d5578373200693"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.287359 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkqlj" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.289344 4772 generic.go:334] "Generic (PLEG): container finished" podID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerID="2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b" exitCode=0 Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.289387 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j67dm" event={"ID":"6cd7a1a3-2773-4ffc-9cef-8015556b3b33","Type":"ContainerDied","Data":"2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.289413 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j67dm" event={"ID":"6cd7a1a3-2773-4ffc-9cef-8015556b3b33","Type":"ContainerDied","Data":"792ee49ebe0fbee04d317205b0605fba443804dbff5cc7a055c580483bb486bf"} Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.289390 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j67dm" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.304799 4772 scope.go:117] "RemoveContainer" containerID="369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.333046 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgctw"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.338757 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jgctw"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.338812 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8vwrd"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.370108 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8vwrd"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.374632 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkqlj"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.379127 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bkqlj"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.385777 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j67dm"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.402041 4772 scope.go:117] "RemoveContainer" containerID="8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.402152 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j67dm"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.402246 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v4b5k"] Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.405286 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcjhk"] Jan 24 03:47:33 crc kubenswrapper[4772]: W0124 03:47:33.408420 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb86459ea_dd22_4a69_8561_379087f99c80.slice/crio-540934c43facec2c6b9447bf533ff5e19da567772ea3c430ed58eba89533221a WatchSource:0}: Error finding container 540934c43facec2c6b9447bf533ff5e19da567772ea3c430ed58eba89533221a: Status 404 returned error can't find the container with id 540934c43facec2c6b9447bf533ff5e19da567772ea3c430ed58eba89533221a Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.415623 4772 scope.go:117] "RemoveContainer" containerID="a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.416007 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b\": container with ID starting with a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b not found: ID does not exist" containerID="a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.416040 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b"} err="failed to get container status \"a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b\": rpc error: code = NotFound desc = could not find container \"a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b\": container with ID starting with a2be7a7afd6a047bd957cfcac7eefdc61ba130d3c0cfbb37a8f9897e4742377b not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.416095 4772 scope.go:117] "RemoveContainer" containerID="369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.416478 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70\": container with ID starting with 369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70 not found: ID does not exist" containerID="369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.416528 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70"} err="failed to get container status \"369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70\": rpc error: code = NotFound desc = could not find container \"369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70\": container with ID starting with 369a4afbe2b0b415a53510e004dfbbd2632c8c4e5887f823068107f7abf0dc70 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.416564 4772 scope.go:117] "RemoveContainer" containerID="8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.417716 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcjhk"] Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.418724 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56\": container with ID starting with 8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56 not found: ID does not exist" containerID="8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.418893 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56"} err="failed to get container status \"8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56\": rpc error: code = NotFound desc = could not find container \"8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56\": container with ID starting with 8d4e92938a389fcaa8b1b03d2e22c5114dfb88fe8ddce3b8148a3e10f1e78c56 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.418924 4772 scope.go:117] "RemoveContainer" containerID="518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.442182 4772 scope.go:117] "RemoveContainer" containerID="80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.458240 4772 scope.go:117] 
"RemoveContainer" containerID="3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.472950 4772 scope.go:117] "RemoveContainer" containerID="518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.473254 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae\": container with ID starting with 518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae not found: ID does not exist" containerID="518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.473291 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae"} err="failed to get container status \"518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae\": rpc error: code = NotFound desc = could not find container \"518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae\": container with ID starting with 518a1fbeb547e7a610ead41d16be2f4f12383fcd48c039f1ed258f71070f8cae not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.473317 4772 scope.go:117] "RemoveContainer" containerID="80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.473858 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64\": container with ID starting with 80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64 not found: ID does not exist" containerID="80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.473908 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64"} err="failed to get container status \"80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64\": rpc error: code = NotFound desc = could not find container \"80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64\": container with ID starting with 80a6f2b51cceff77c4561366c8271e4131677aba779dd1b314586461a10cce64 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.473945 4772 scope.go:117] "RemoveContainer" containerID="3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.474298 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6\": container with ID starting with 3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6 not found: ID does not exist" containerID="3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.474331 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6"} err="failed to get container status \"3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6\": 
rpc error: code = NotFound desc = could not find container \"3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6\": container with ID starting with 3951259bad8f55a152a5f740b2acb484c1c5871a0ca2a9f57d3243330dc695d6 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.474356 4772 scope.go:117] "RemoveContainer" containerID="3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.497045 4772 scope.go:117] "RemoveContainer" containerID="93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.515272 4772 scope.go:117] "RemoveContainer" containerID="3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.516073 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001\": container with ID starting with 3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001 not found: ID does not exist" containerID="3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.516114 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001"} err="failed to get container status \"3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001\": rpc error: code = NotFound desc = could not find container \"3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001\": container with ID starting with 3461cd30ec8c83d5c64d694ad0f81d2c49bc823ee40c5798f4b86e7b62e90001 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.516146 4772 scope.go:117] "RemoveContainer" containerID="93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.516542 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c\": container with ID starting with 93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c not found: ID does not exist" containerID="93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.516577 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c"} err="failed to get container status \"93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c\": rpc error: code = NotFound desc = could not find container \"93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c\": container with ID starting with 93c31e366a5c974fa84d853717b4f91c0b98e4c82d4f2d4a7c22e43964181f3c not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.516605 4772 scope.go:117] "RemoveContainer" containerID="7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.529704 4772 scope.go:117] "RemoveContainer" containerID="68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.550351 4772 scope.go:117] "RemoveContainer" 
containerID="d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.566525 4772 scope.go:117] "RemoveContainer" containerID="7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.566900 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e\": container with ID starting with 7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e not found: ID does not exist" containerID="7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.566932 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e"} err="failed to get container status \"7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e\": rpc error: code = NotFound desc = could not find container \"7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e\": container with ID starting with 7b7cd0e75d93afd45fe8676c371249e74b6268152b08aa6a65d27bfc15d62c7e not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.566955 4772 scope.go:117] "RemoveContainer" containerID="68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.567386 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753\": container with ID starting with 68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753 not found: ID does not exist" containerID="68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.567408 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753"} err="failed to get container status \"68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753\": rpc error: code = NotFound desc = could not find container \"68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753\": container with ID starting with 68072ae9687ebafa171151fc524b8ea15f96ce9208643d09147f094526be9753 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.567423 4772 scope.go:117] "RemoveContainer" containerID="d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.567677 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7\": container with ID starting with d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7 not found: ID does not exist" containerID="d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.567699 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7"} err="failed to get container status \"d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7\": rpc error: code = 
NotFound desc = could not find container \"d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7\": container with ID starting with d299bebfe48b96f2583858ed00e011f21b325cc178ddf5c53be485d8d75074c7 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.567712 4772 scope.go:117] "RemoveContainer" containerID="2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.587768 4772 scope.go:117] "RemoveContainer" containerID="8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.603997 4772 scope.go:117] "RemoveContainer" containerID="203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.619329 4772 scope.go:117] "RemoveContainer" containerID="2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.621291 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b\": container with ID starting with 2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b not found: ID does not exist" containerID="2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.621346 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b"} err="failed to get container status \"2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b\": rpc error: code = NotFound desc = could not find container \"2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b\": container with ID starting with 2ce09d1c235e019e1ce76630d1111fa956318aa0a054f9d6ddbc0a6113df588b not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.621382 4772 scope.go:117] "RemoveContainer" containerID="8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.621756 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6\": container with ID starting with 8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6 not found: ID does not exist" containerID="8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.621784 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6"} err="failed to get container status \"8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6\": rpc error: code = NotFound desc = could not find container \"8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6\": container with ID starting with 8f5ddca42d7ed63dfce94443f668c1e400b7c8bb57a4f9467dd4d4e0e7c6b9c6 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.621806 4772 scope.go:117] "RemoveContainer" containerID="203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90" Jan 24 03:47:33 crc kubenswrapper[4772]: E0124 03:47:33.622085 4772 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90\": container with ID starting with 203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90 not found: ID does not exist" containerID="203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.622132 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90"} err="failed to get container status \"203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90\": rpc error: code = NotFound desc = could not find container \"203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90\": container with ID starting with 203e3e64281d72c1b68fb6788fd6c4641f82c14f3eef87d042bce699f0c3fd90 not found: ID does not exist" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.666229 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" path="/var/lib/kubelet/pods/022f55bb-f179-48cc-ae69-a9936070e3b7/volumes" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.666843 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" path="/var/lib/kubelet/pods/1276fcbc-1783-4f3a-8ee0-8be45b19d4b1/volumes" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.667478 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" path="/var/lib/kubelet/pods/6cd7a1a3-2773-4ffc-9cef-8015556b3b33/volumes" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.668512 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" path="/var/lib/kubelet/pods/e18f918d-3751-4397-8029-4b1a3bc87953/volumes" Jan 24 03:47:33 crc kubenswrapper[4772]: I0124 03:47:33.669002 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" path="/var/lib/kubelet/pods/f65f1015-89a0-482e-87d3-f2b2e2149e2d/volumes" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.298287 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" event={"ID":"b86459ea-dd22-4a69-8561-379087f99c80","Type":"ContainerStarted","Data":"07de8d6aeb9a1ba2362c74f0b4476eda087c703abe4596d6f9e7f931e11bc6d2"} Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.298340 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" event={"ID":"b86459ea-dd22-4a69-8561-379087f99c80","Type":"ContainerStarted","Data":"540934c43facec2c6b9447bf533ff5e19da567772ea3c430ed58eba89533221a"} Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.298723 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.302688 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.343255 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v4b5k" podStartSLOduration=2.343238622 podStartE2EDuration="2.343238622s" podCreationTimestamp="2026-01-24 
03:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:47:34.32361756 +0000 UTC m=+351.360708285" watchObservedRunningTime="2026-01-24 03:47:34.343238622 +0000 UTC m=+351.380329337" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.717816 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4m4h"] Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718112 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718132 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718154 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718166 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718191 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718204 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718220 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718232 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718252 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718265 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718280 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718294 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718308 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718320 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718335 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718347 4772 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="extract-utilities" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718366 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718378 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718393 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718405 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718423 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718435 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718452 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718464 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718478 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718490 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="extract-content" Jan 24 03:47:34 crc kubenswrapper[4772]: E0124 03:47:34.718511 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718523 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718682 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718701 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f1015-89a0-482e-87d3-f2b2e2149e2d" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718715 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="022f55bb-f179-48cc-ae69-a9936070e3b7" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718731 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18f918d-3751-4397-8029-4b1a3bc87953" containerName="marketplace-operator" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.718775 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd7a1a3-2773-4ffc-9cef-8015556b3b33" containerName="registry-server" Jan 24 03:47:34 crc 
kubenswrapper[4772]: I0124 03:47:34.718795 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1276fcbc-1783-4f3a-8ee0-8be45b19d4b1" containerName="registry-server" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.719954 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.726405 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.734207 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4m4h"] Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.859009 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-utilities\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.859066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-catalog-content\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.859118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lflh\" (UniqueName: \"kubernetes.io/projected/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-kube-api-access-2lflh\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.917782 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rmps9"] Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.918847 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.921231 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.929219 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmps9"] Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.960509 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-utilities\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.960549 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-catalog-content\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.960591 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lflh\" (UniqueName: \"kubernetes.io/projected/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-kube-api-access-2lflh\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.961042 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-catalog-content\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.961324 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-utilities\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:34 crc kubenswrapper[4772]: I0124 03:47:34.983046 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lflh\" (UniqueName: \"kubernetes.io/projected/0f6ffb4d-0069-4480-a6e5-b2e8d2a85521-kube-api-access-2lflh\") pod \"community-operators-b4m4h\" (UID: \"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521\") " pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.046788 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.061936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slc8s\" (UniqueName: \"kubernetes.io/projected/0727c90d-0963-4183-8a91-b8e1b8b85b14-kube-api-access-slc8s\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.061993 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0727c90d-0963-4183-8a91-b8e1b8b85b14-utilities\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.062024 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0727c90d-0963-4183-8a91-b8e1b8b85b14-catalog-content\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.163434 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slc8s\" (UniqueName: \"kubernetes.io/projected/0727c90d-0963-4183-8a91-b8e1b8b85b14-kube-api-access-slc8s\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.163493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0727c90d-0963-4183-8a91-b8e1b8b85b14-utilities\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.163520 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0727c90d-0963-4183-8a91-b8e1b8b85b14-catalog-content\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.163952 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0727c90d-0963-4183-8a91-b8e1b8b85b14-catalog-content\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.165065 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0727c90d-0963-4183-8a91-b8e1b8b85b14-utilities\") pod \"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.185285 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slc8s\" (UniqueName: \"kubernetes.io/projected/0727c90d-0963-4183-8a91-b8e1b8b85b14-kube-api-access-slc8s\") pod 
\"redhat-marketplace-rmps9\" (UID: \"0727c90d-0963-4183-8a91-b8e1b8b85b14\") " pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.243813 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.443176 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4m4h"] Jan 24 03:47:35 crc kubenswrapper[4772]: W0124 03:47:35.450241 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f6ffb4d_0069_4480_a6e5_b2e8d2a85521.slice/crio-b5a585da2fcd8ec2d00093e70accd732a5fcc0da89df10c609a71fcbbe5ea876 WatchSource:0}: Error finding container b5a585da2fcd8ec2d00093e70accd732a5fcc0da89df10c609a71fcbbe5ea876: Status 404 returned error can't find the container with id b5a585da2fcd8ec2d00093e70accd732a5fcc0da89df10c609a71fcbbe5ea876 Jan 24 03:47:35 crc kubenswrapper[4772]: I0124 03:47:35.686895 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmps9"] Jan 24 03:47:35 crc kubenswrapper[4772]: W0124 03:47:35.726263 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0727c90d_0963_4183_8a91_b8e1b8b85b14.slice/crio-84caf2723693edf4af93b06f8032c7f6ca15cd63e151166d0446e0049e0e8376 WatchSource:0}: Error finding container 84caf2723693edf4af93b06f8032c7f6ca15cd63e151166d0446e0049e0e8376: Status 404 returned error can't find the container with id 84caf2723693edf4af93b06f8032c7f6ca15cd63e151166d0446e0049e0e8376 Jan 24 03:47:36 crc kubenswrapper[4772]: I0124 03:47:36.325706 4772 generic.go:334] "Generic (PLEG): container finished" podID="0f6ffb4d-0069-4480-a6e5-b2e8d2a85521" containerID="3950e819aa738493fcb174e4c416006d151fd83ede96bab85a91956508485947" exitCode=0 Jan 24 03:47:36 crc kubenswrapper[4772]: I0124 03:47:36.325803 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4m4h" event={"ID":"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521","Type":"ContainerDied","Data":"3950e819aa738493fcb174e4c416006d151fd83ede96bab85a91956508485947"} Jan 24 03:47:36 crc kubenswrapper[4772]: I0124 03:47:36.326107 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4m4h" event={"ID":"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521","Type":"ContainerStarted","Data":"b5a585da2fcd8ec2d00093e70accd732a5fcc0da89df10c609a71fcbbe5ea876"} Jan 24 03:47:36 crc kubenswrapper[4772]: I0124 03:47:36.329340 4772 generic.go:334] "Generic (PLEG): container finished" podID="0727c90d-0963-4183-8a91-b8e1b8b85b14" containerID="75faeb3f0c995e3694dfb39ff7b7c13a9bb2214242d0ac07f940b8a7cc8fcf40" exitCode=0 Jan 24 03:47:36 crc kubenswrapper[4772]: I0124 03:47:36.329428 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmps9" event={"ID":"0727c90d-0963-4183-8a91-b8e1b8b85b14","Type":"ContainerDied","Data":"75faeb3f0c995e3694dfb39ff7b7c13a9bb2214242d0ac07f940b8a7cc8fcf40"} Jan 24 03:47:36 crc kubenswrapper[4772]: I0124 03:47:36.329500 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmps9" event={"ID":"0727c90d-0963-4183-8a91-b8e1b8b85b14","Type":"ContainerStarted","Data":"84caf2723693edf4af93b06f8032c7f6ca15cd63e151166d0446e0049e0e8376"} Jan 24 
03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.113417 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qmzlq"] Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.115032 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.117606 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.128805 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qmzlq"] Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.293483 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-catalog-content\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.293560 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-utilities\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.293581 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dpw\" (UniqueName: \"kubernetes.io/projected/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-kube-api-access-66dpw\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.316271 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dk99t"] Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.318417 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.320677 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.338542 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dk99t"] Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.342727 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4m4h" event={"ID":"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521","Type":"ContainerStarted","Data":"bcc54f32ff117683aa2923415aee7afa2780aba0460c222a5306c2a365c5bfd0"} Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.346068 4772 generic.go:334] "Generic (PLEG): container finished" podID="0727c90d-0963-4183-8a91-b8e1b8b85b14" containerID="7f03f9a51c0aa63d8f453f9d5e98fecb867370b3f175459b528ebfd9881c6e31" exitCode=0 Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.346116 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmps9" event={"ID":"0727c90d-0963-4183-8a91-b8e1b8b85b14","Type":"ContainerDied","Data":"7f03f9a51c0aa63d8f453f9d5e98fecb867370b3f175459b528ebfd9881c6e31"} Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.395003 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-catalog-content\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.395190 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-utilities\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.395272 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66dpw\" (UniqueName: \"kubernetes.io/projected/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-kube-api-access-66dpw\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.395493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-catalog-content\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.396008 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-utilities\") pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.417589 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66dpw\" (UniqueName: \"kubernetes.io/projected/e9a42743-0d64-48b8-bcb1-ed7ce83864a9-kube-api-access-66dpw\") 
pod \"certified-operators-qmzlq\" (UID: \"e9a42743-0d64-48b8-bcb1-ed7ce83864a9\") " pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.429892 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.496932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkw6h\" (UniqueName: \"kubernetes.io/projected/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-kube-api-access-fkw6h\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.497075 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-utilities\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.497123 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-catalog-content\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.598850 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-utilities\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.598931 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-catalog-content\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.598985 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkw6h\" (UniqueName: \"kubernetes.io/projected/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-kube-api-access-fkw6h\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.600633 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-utilities\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.600679 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-catalog-content\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.619866 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkw6h\" (UniqueName: \"kubernetes.io/projected/f3bdea06-126d-4d1a-aeac-3b620cccaa0d-kube-api-access-fkw6h\") pod \"redhat-operators-dk99t\" (UID: \"f3bdea06-126d-4d1a-aeac-3b620cccaa0d\") " pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.619951 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qmzlq"] Jan 24 03:47:37 crc kubenswrapper[4772]: I0124 03:47:37.629958 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.079055 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dk99t"] Jan 24 03:47:38 crc kubenswrapper[4772]: W0124 03:47:38.086673 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3bdea06_126d_4d1a_aeac_3b620cccaa0d.slice/crio-eb1281f68b5f8d63e8aa8018259bec36ba71cf0c5cbad2a489dac6e6f8ad0804 WatchSource:0}: Error finding container eb1281f68b5f8d63e8aa8018259bec36ba71cf0c5cbad2a489dac6e6f8ad0804: Status 404 returned error can't find the container with id eb1281f68b5f8d63e8aa8018259bec36ba71cf0c5cbad2a489dac6e6f8ad0804 Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.355437 4772 generic.go:334] "Generic (PLEG): container finished" podID="f3bdea06-126d-4d1a-aeac-3b620cccaa0d" containerID="ecf8d8c4cd08a4911413414a5d0435c377456fed8c1aa01e5cf449d9e12fd1d6" exitCode=0 Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.355509 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dk99t" event={"ID":"f3bdea06-126d-4d1a-aeac-3b620cccaa0d","Type":"ContainerDied","Data":"ecf8d8c4cd08a4911413414a5d0435c377456fed8c1aa01e5cf449d9e12fd1d6"} Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.355538 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dk99t" event={"ID":"f3bdea06-126d-4d1a-aeac-3b620cccaa0d","Type":"ContainerStarted","Data":"eb1281f68b5f8d63e8aa8018259bec36ba71cf0c5cbad2a489dac6e6f8ad0804"} Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.359417 4772 generic.go:334] "Generic (PLEG): container finished" podID="e9a42743-0d64-48b8-bcb1-ed7ce83864a9" containerID="68e59b50385ac9ed6223a216707d282a62251dce91b5397bfe55c84f37037e2a" exitCode=0 Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.359620 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmzlq" event={"ID":"e9a42743-0d64-48b8-bcb1-ed7ce83864a9","Type":"ContainerDied","Data":"68e59b50385ac9ed6223a216707d282a62251dce91b5397bfe55c84f37037e2a"} Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.359652 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmzlq" event={"ID":"e9a42743-0d64-48b8-bcb1-ed7ce83864a9","Type":"ContainerStarted","Data":"9cad89b9616750a590b74b4e3a03e285035c5de562f823c6499303a89bf46211"} Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.375831 4772 generic.go:334] "Generic (PLEG): container finished" podID="0f6ffb4d-0069-4480-a6e5-b2e8d2a85521" containerID="bcc54f32ff117683aa2923415aee7afa2780aba0460c222a5306c2a365c5bfd0" exitCode=0 Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.375964 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-b4m4h" event={"ID":"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521","Type":"ContainerDied","Data":"bcc54f32ff117683aa2923415aee7afa2780aba0460c222a5306c2a365c5bfd0"} Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.385064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmps9" event={"ID":"0727c90d-0963-4183-8a91-b8e1b8b85b14","Type":"ContainerStarted","Data":"7bd0bbd3bdfca4fbfae33f8dc24c64257292edd0cf4a843af2cde1bc9d7a01af"} Jan 24 03:47:38 crc kubenswrapper[4772]: I0124 03:47:38.466681 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rmps9" podStartSLOduration=3.025994444 podStartE2EDuration="4.466664929s" podCreationTimestamp="2026-01-24 03:47:34 +0000 UTC" firstStartedPulling="2026-01-24 03:47:36.330448868 +0000 UTC m=+353.367539593" lastFinishedPulling="2026-01-24 03:47:37.771119353 +0000 UTC m=+354.808210078" observedRunningTime="2026-01-24 03:47:38.463176111 +0000 UTC m=+355.500266836" watchObservedRunningTime="2026-01-24 03:47:38.466664929 +0000 UTC m=+355.503755644" Jan 24 03:47:39 crc kubenswrapper[4772]: I0124 03:47:39.393858 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4m4h" event={"ID":"0f6ffb4d-0069-4480-a6e5-b2e8d2a85521","Type":"ContainerStarted","Data":"f4113c29520452c44a5486ff6af3ff4beabe0d370988039c8b74745a874c8478"} Jan 24 03:47:39 crc kubenswrapper[4772]: I0124 03:47:39.397082 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dk99t" event={"ID":"f3bdea06-126d-4d1a-aeac-3b620cccaa0d","Type":"ContainerStarted","Data":"0baf33dd9ec37f9ff67a6ab83cbc4ccc62c8f43a8fd243f741e090bf6d4e09e1"} Jan 24 03:47:39 crc kubenswrapper[4772]: I0124 03:47:39.399234 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmzlq" event={"ID":"e9a42743-0d64-48b8-bcb1-ed7ce83864a9","Type":"ContainerStarted","Data":"dba6b6d46616024961420b043fef7d2effd954efb66d8b1ff555d3111d954dba"} Jan 24 03:47:39 crc kubenswrapper[4772]: I0124 03:47:39.418100 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4m4h" podStartSLOduration=2.95036129 podStartE2EDuration="5.418069992s" podCreationTimestamp="2026-01-24 03:47:34 +0000 UTC" firstStartedPulling="2026-01-24 03:47:36.327514716 +0000 UTC m=+353.364605441" lastFinishedPulling="2026-01-24 03:47:38.795223418 +0000 UTC m=+355.832314143" observedRunningTime="2026-01-24 03:47:39.415373356 +0000 UTC m=+356.452464081" watchObservedRunningTime="2026-01-24 03:47:39.418069992 +0000 UTC m=+356.455160717" Jan 24 03:47:40 crc kubenswrapper[4772]: I0124 03:47:40.407404 4772 generic.go:334] "Generic (PLEG): container finished" podID="f3bdea06-126d-4d1a-aeac-3b620cccaa0d" containerID="0baf33dd9ec37f9ff67a6ab83cbc4ccc62c8f43a8fd243f741e090bf6d4e09e1" exitCode=0 Jan 24 03:47:40 crc kubenswrapper[4772]: I0124 03:47:40.407483 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dk99t" event={"ID":"f3bdea06-126d-4d1a-aeac-3b620cccaa0d","Type":"ContainerDied","Data":"0baf33dd9ec37f9ff67a6ab83cbc4ccc62c8f43a8fd243f741e090bf6d4e09e1"} Jan 24 03:47:40 crc kubenswrapper[4772]: I0124 03:47:40.409435 4772 generic.go:334] "Generic (PLEG): container finished" podID="e9a42743-0d64-48b8-bcb1-ed7ce83864a9" 
containerID="dba6b6d46616024961420b043fef7d2effd954efb66d8b1ff555d3111d954dba" exitCode=0 Jan 24 03:47:40 crc kubenswrapper[4772]: I0124 03:47:40.410833 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmzlq" event={"ID":"e9a42743-0d64-48b8-bcb1-ed7ce83864a9","Type":"ContainerDied","Data":"dba6b6d46616024961420b043fef7d2effd954efb66d8b1ff555d3111d954dba"} Jan 24 03:47:41 crc kubenswrapper[4772]: I0124 03:47:41.415803 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dk99t" event={"ID":"f3bdea06-126d-4d1a-aeac-3b620cccaa0d","Type":"ContainerStarted","Data":"e63125b29801bdd63699a4ff2f73d9aea0cf220092304868db2e687da70a7cf3"} Jan 24 03:47:41 crc kubenswrapper[4772]: I0124 03:47:41.418167 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmzlq" event={"ID":"e9a42743-0d64-48b8-bcb1-ed7ce83864a9","Type":"ContainerStarted","Data":"ca9274686e714560710cf45b7ff9a344742aa6680ff57ad29bb9f55d2c9ae41f"} Jan 24 03:47:41 crc kubenswrapper[4772]: I0124 03:47:41.460626 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qmzlq" podStartSLOduration=1.9931916680000001 podStartE2EDuration="4.460608201s" podCreationTimestamp="2026-01-24 03:47:37 +0000 UTC" firstStartedPulling="2026-01-24 03:47:38.363042908 +0000 UTC m=+355.400133633" lastFinishedPulling="2026-01-24 03:47:40.830459441 +0000 UTC m=+357.867550166" observedRunningTime="2026-01-24 03:47:41.458032289 +0000 UTC m=+358.495123034" watchObservedRunningTime="2026-01-24 03:47:41.460608201 +0000 UTC m=+358.497698926" Jan 24 03:47:41 crc kubenswrapper[4772]: I0124 03:47:41.462560 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dk99t" podStartSLOduration=1.963687959 podStartE2EDuration="4.462554466s" podCreationTimestamp="2026-01-24 03:47:37 +0000 UTC" firstStartedPulling="2026-01-24 03:47:38.357448121 +0000 UTC m=+355.394538846" lastFinishedPulling="2026-01-24 03:47:40.856314628 +0000 UTC m=+357.893405353" observedRunningTime="2026-01-24 03:47:41.443502411 +0000 UTC m=+358.480593136" watchObservedRunningTime="2026-01-24 03:47:41.462554466 +0000 UTC m=+358.499645191" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.047436 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.048151 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.109287 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.244107 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.244306 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.312789 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.488295 4772 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rmps9" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.522133 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b4m4h" Jan 24 03:47:45 crc kubenswrapper[4772]: I0124 03:47:45.672458 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" podUID="9b58435a-62c0-4129-a8d5-434a75e0f600" containerName="registry" containerID="cri-o://fd013c1266b873b081e9f4593f584bb12918cffde42c57706607b331d879acb6" gracePeriod=30 Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.456677 4772 generic.go:334] "Generic (PLEG): container finished" podID="9b58435a-62c0-4129-a8d5-434a75e0f600" containerID="fd013c1266b873b081e9f4593f584bb12918cffde42c57706607b331d879acb6" exitCode=0 Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.456792 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" event={"ID":"9b58435a-62c0-4129-a8d5-434a75e0f600","Type":"ContainerDied","Data":"fd013c1266b873b081e9f4593f584bb12918cffde42c57706607b331d879acb6"} Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.613677 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659156 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-trusted-ca\") pod \"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659311 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-tls\") pod \"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659363 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-certificates\") pod \"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659414 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b58435a-62c0-4129-a8d5-434a75e0f600-ca-trust-extracted\") pod \"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659447 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b58435a-62c0-4129-a8d5-434a75e0f600-installation-pull-secrets\") pod \"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659497 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-bound-sa-token\") pod 
\"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659578 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wnrr\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-kube-api-access-8wnrr\") pod \"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.659854 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9b58435a-62c0-4129-a8d5-434a75e0f600\" (UID: \"9b58435a-62c0-4129-a8d5-434a75e0f600\") " Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.660547 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.661608 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.669809 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b58435a-62c0-4129-a8d5-434a75e0f600-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.669838 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.671123 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-kube-api-access-8wnrr" (OuterVolumeSpecName: "kube-api-access-8wnrr") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "kube-api-access-8wnrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.678711 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.679611 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.685682 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b58435a-62c0-4129-a8d5-434a75e0f600-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9b58435a-62c0-4129-a8d5-434a75e0f600" (UID: "9b58435a-62c0-4129-a8d5-434a75e0f600"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.762658 4772 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b58435a-62c0-4129-a8d5-434a75e0f600-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.762760 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.762783 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wnrr\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-kube-api-access-8wnrr\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.762800 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.762815 4772 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.762829 4772 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b58435a-62c0-4129-a8d5-434a75e0f600-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.762844 4772 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b58435a-62c0-4129-a8d5-434a75e0f600-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.900398 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:47:46 crc kubenswrapper[4772]: I0124 03:47:46.900523 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.430287 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.430801 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.463985 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" event={"ID":"9b58435a-62c0-4129-a8d5-434a75e0f600","Type":"ContainerDied","Data":"f2a425af17e5292a1342501901f312fa74c87f29783aabc992d1ebd457fa0d94"} Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.464667 4772 scope.go:117] "RemoveContainer" containerID="fd013c1266b873b081e9f4593f584bb12918cffde42c57706607b331d879acb6" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.464010 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2sfxs" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.494037 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.522425 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2sfxs"] Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.546073 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2sfxs"] Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.554101 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qmzlq" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.630866 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.630958 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.667655 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b58435a-62c0-4129-a8d5-434a75e0f600" path="/var/lib/kubelet/pods/9b58435a-62c0-4129-a8d5-434a75e0f600/volumes" Jan 24 03:47:47 crc kubenswrapper[4772]: I0124 03:47:47.677579 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:47:48 crc kubenswrapper[4772]: I0124 03:47:48.512518 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dk99t" Jan 24 03:48:16 crc kubenswrapper[4772]: I0124 03:48:16.899401 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:48:16 crc kubenswrapper[4772]: I0124 03:48:16.899983 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:48:46 crc kubenswrapper[4772]: I0124 03:48:46.900027 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:46.900679 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:46.900769 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:46.901495 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c89de51d5757a7263903a81bb9b304680a026a0c4e151b6af06d4d8c1040aabc"} pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:46.901572 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://c89de51d5757a7263903a81bb9b304680a026a0c4e151b6af06d4d8c1040aabc" gracePeriod=600 Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:47.884478 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="c89de51d5757a7263903a81bb9b304680a026a0c4e151b6af06d4d8c1040aabc" exitCode=0 Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:47.885373 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"c89de51d5757a7263903a81bb9b304680a026a0c4e151b6af06d4d8c1040aabc"} Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:47.885420 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"3a1531a803689441abefc671ffc29f346d56f44c9d3bb0f57c687ae2188e6f75"} Jan 24 03:48:47 crc kubenswrapper[4772]: I0124 03:48:47.885448 4772 scope.go:117] "RemoveContainer" containerID="4efd19f9a344bcbb8ffbdaaff986e72c5e4c07c6764e824e6a084b86e440cdb7" Jan 24 03:51:16 crc kubenswrapper[4772]: I0124 03:51:16.900038 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:51:16 crc kubenswrapper[4772]: I0124 03:51:16.900721 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" 
podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:51:46 crc kubenswrapper[4772]: I0124 03:51:46.899490 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:51:46 crc kubenswrapper[4772]: I0124 03:51:46.900140 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:52:16 crc kubenswrapper[4772]: I0124 03:52:16.900029 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:52:16 crc kubenswrapper[4772]: I0124 03:52:16.900505 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:52:16 crc kubenswrapper[4772]: I0124 03:52:16.900555 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:52:16 crc kubenswrapper[4772]: I0124 03:52:16.901112 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a1531a803689441abefc671ffc29f346d56f44c9d3bb0f57c687ae2188e6f75"} pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 03:52:16 crc kubenswrapper[4772]: I0124 03:52:16.901161 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://3a1531a803689441abefc671ffc29f346d56f44c9d3bb0f57c687ae2188e6f75" gracePeriod=600 Jan 24 03:52:17 crc kubenswrapper[4772]: I0124 03:52:17.379600 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="3a1531a803689441abefc671ffc29f346d56f44c9d3bb0f57c687ae2188e6f75" exitCode=0 Jan 24 03:52:17 crc kubenswrapper[4772]: I0124 03:52:17.379628 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"3a1531a803689441abefc671ffc29f346d56f44c9d3bb0f57c687ae2188e6f75"} Jan 24 03:52:17 crc kubenswrapper[4772]: I0124 03:52:17.379951 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" 
event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"c4c73373f46ca383dd1219546a633cd8ad9bb24ea298a230636c1f231a1c6003"} Jan 24 03:52:17 crc kubenswrapper[4772]: I0124 03:52:17.379979 4772 scope.go:117] "RemoveContainer" containerID="c89de51d5757a7263903a81bb9b304680a026a0c4e151b6af06d4d8c1040aabc" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.406608 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c46s"] Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.412987 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-node" containerID="cri-o://1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.412984 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="northd" containerID="cri-o://be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.413046 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="sbdb" containerID="cri-o://6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.413004 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovn-acl-logging" containerID="cri-o://7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.413031 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.413011 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="nbdb" containerID="cri-o://3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.413199 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovn-controller" containerID="cri-o://6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.450357 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" containerID="cri-o://7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7" gracePeriod=30 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.631794 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/2.log" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.632282 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/1.log" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.632320 4772 generic.go:334] "Generic (PLEG): container finished" podID="3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d" containerID="4303a17707279e168ab66f97fbff46b2d3b3e2c3dff4c390520b5d78c75594ed" exitCode=2 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.632384 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerDied","Data":"4303a17707279e168ab66f97fbff46b2d3b3e2c3dff4c390520b5d78c75594ed"} Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.632418 4772 scope.go:117] "RemoveContainer" containerID="358648f4a0533b7b8181a90240f8686fd6c2cf2d2699dc15b93d0d2dc60587e9" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.633006 4772 scope.go:117] "RemoveContainer" containerID="4303a17707279e168ab66f97fbff46b2d3b3e2c3dff4c390520b5d78c75594ed" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.633350 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kqp8g_openshift-multus(3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d)\"" pod="openshift-multus/multus-kqp8g" podUID="3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.636951 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovnkube-controller/3.log" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.641880 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovn-acl-logging/0.log" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642435 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovn-controller/0.log" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642835 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7" exitCode=0 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642862 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3" exitCode=0 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642872 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e" exitCode=0 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642880 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d" exitCode=143 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642889 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" 
containerID="6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76" exitCode=143 Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642922 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7"} Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.642989 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3"} Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.643007 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e"} Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.643022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d"} Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.643035 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76"} Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.740502 4772 scope.go:117] "RemoveContainer" containerID="20c13fe4fca72027acbfaefc779cd9cc3fa418988c38f1129c484060aefb9267" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.755727 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovn-acl-logging/0.log" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.756243 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovn-controller/0.log" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.756592 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.811965 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-66glq"] Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812206 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-node" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812222 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-node" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812234 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812243 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812257 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="northd" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812265 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="northd" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812281 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812289 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812299 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="sbdb" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812307 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="sbdb" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812320 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="nbdb" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812328 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="nbdb" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812341 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kubecfg-setup" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812350 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kubecfg-setup" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812361 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b58435a-62c0-4129-a8d5-434a75e0f600" containerName="registry" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812369 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b58435a-62c0-4129-a8d5-434a75e0f600" containerName="registry" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812382 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" 
containerName="ovn-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812390 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovn-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812402 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovn-acl-logging" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812410 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovn-acl-logging" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812420 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812429 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812439 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812447 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812455 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812463 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812571 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812585 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812594 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812609 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovn-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812618 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812628 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="sbdb" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812637 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovn-acl-logging" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812648 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b58435a-62c0-4129-a8d5-434a75e0f600" containerName="registry" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812657 4772 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="northd" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812666 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="kube-rbac-proxy-node" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812677 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="nbdb" Jan 24 03:52:54 crc kubenswrapper[4772]: E0124 03:52:54.812815 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812825 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.812931 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.813210 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerName="ovnkube-controller" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.815081 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824125 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-etc-openvswitch\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824162 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-var-lib-openvswitch\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824183 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824200 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-slash\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824220 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824226 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-config\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824244 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-script-lib\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824304 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-slash" (OuterVolumeSpecName: "host-slash") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824334 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824397 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-slash\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovnkube-script-lib\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824469 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-node-log\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824507 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-systemd-units\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824529 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-openvswitch\") pod \"ovnkube-node-66glq\" (UID: 
\"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824554 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-env-overrides\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824650 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824650 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovnkube-config\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824664 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824711 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-log-socket\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.824935 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-systemd\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825060 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-etc-openvswitch\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825090 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825114 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-run-netns\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825151 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-cni-bin\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825182 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-cni-netd\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825206 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-ovn\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825229 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-var-lib-openvswitch\") pod 
\"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrhw\" (UniqueName: \"kubernetes.io/projected/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-kube-api-access-prrhw\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825290 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-kubelet\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825357 4772 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-slash\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825373 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825384 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825394 4772 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.825405 4772 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926265 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-env-overrides\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926322 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-ovn-kubernetes\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926349 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-systemd-units\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926369 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-netns\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926397 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-netd\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926415 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926438 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-kubelet\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926509 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926511 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926551 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926530 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926524 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926531 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926591 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-systemd\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926609 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-bin\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926834 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-node-log\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926870 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-log-socket\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926891 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926905 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovn-node-metrics-cert\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926924 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-openvswitch\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926924 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-log-socket" (OuterVolumeSpecName: "log-socket") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926930 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-node-log" (OuterVolumeSpecName: "node-log") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926950 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dn2g\" (UniqueName: \"kubernetes.io/projected/849e85f7-2aca-4f00-a9be-a5f40979ad26-kube-api-access-6dn2g\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926968 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926973 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-ovn\") pod \"849e85f7-2aca-4f00-a9be-a5f40979ad26\" (UID: \"849e85f7-2aca-4f00-a9be-a5f40979ad26\") " Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.926990 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927024 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927235 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927272 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-slash\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927313 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovnkube-script-lib\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927365 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-node-log\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927413 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-systemd-units\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927439 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-openvswitch\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927464 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-env-overrides\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927518 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovnkube-config\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927560 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-log-socket\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927622 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-systemd\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927646 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-etc-openvswitch\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927674 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927684 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927710 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-run-netns\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927754 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-slash\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927782 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-cni-bin\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927842 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-cni-netd\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 
03:52:54.927848 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-log-socket\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927873 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-ovn\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927908 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-ovn\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927904 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927943 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-var-lib-openvswitch\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927944 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-systemd\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927968 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-etc-openvswitch\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927918 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-var-lib-openvswitch\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927988 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-systemd-units\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928019 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrhw\" 
(UniqueName: \"kubernetes.io/projected/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-kube-api-access-prrhw\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928039 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-run-openvswitch\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928049 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-kubelet\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928087 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-run-netns\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.927642 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-node-log\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928299 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-cni-bin\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928332 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-cni-netd\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928574 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-env-overrides\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovnkube-config\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928720 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/849e85f7-2aca-4f00-a9be-a5f40979ad26-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928761 
4772 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928784 4772 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928799 4772 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928811 4772 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928823 4772 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928839 4772 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928852 4772 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928866 4772 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-node-log\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928878 4772 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-log-socket\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928891 4772 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928904 4772 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928820 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovnkube-script-lib\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.928924 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-host-kubelet\") pod 
\"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.933225 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849e85f7-2aca-4f00-a9be-a5f40979ad26-kube-api-access-6dn2g" (OuterVolumeSpecName: "kube-api-access-6dn2g") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "kube-api-access-6dn2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.933434 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.933693 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.940176 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "849e85f7-2aca-4f00-a9be-a5f40979ad26" (UID: "849e85f7-2aca-4f00-a9be-a5f40979ad26"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 03:52:54 crc kubenswrapper[4772]: I0124 03:52:54.946197 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrhw\" (UniqueName: \"kubernetes.io/projected/a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e-kube-api-access-prrhw\") pod \"ovnkube-node-66glq\" (UID: \"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.029651 4772 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/849e85f7-2aca-4f00-a9be-a5f40979ad26-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.029684 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/849e85f7-2aca-4f00-a9be-a5f40979ad26-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.029696 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dn2g\" (UniqueName: \"kubernetes.io/projected/849e85f7-2aca-4f00-a9be-a5f40979ad26-kube-api-access-6dn2g\") on node \"crc\" DevicePath \"\"" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.129072 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.661308 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/2.log" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.664267 4772 generic.go:334] "Generic (PLEG): container finished" podID="a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e" containerID="84794e21cd85a67acd3357f1a59e39a42406732a76f3e91570163e4b0c5abc41" exitCode=0 Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.670103 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerDied","Data":"84794e21cd85a67acd3357f1a59e39a42406732a76f3e91570163e4b0c5abc41"} Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.670149 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"81e20c1808ffb92d9ba59b40f9337153c31c5033178e7577526b017e7b9a5123"} Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.673611 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovn-acl-logging/0.log" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674276 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c46s_849e85f7-2aca-4f00-a9be-a5f40979ad26/ovn-controller/0.log" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674786 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8" exitCode=0 Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674831 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3" exitCode=0 Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674839 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e85f7-2aca-4f00-a9be-a5f40979ad26" containerID="be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef" exitCode=0 Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674887 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8"} Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674924 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3"} Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674932 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674967 4772 scope.go:117] "RemoveContainer" containerID="7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.674936 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef"} Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.675136 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c46s" event={"ID":"849e85f7-2aca-4f00-a9be-a5f40979ad26","Type":"ContainerDied","Data":"9b95aa43972ce006c3045df4b91f2149650d66d2921a0c4b42ecc09893969877"} Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.692626 4772 scope.go:117] "RemoveContainer" containerID="6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.710839 4772 scope.go:117] "RemoveContainer" containerID="3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.736891 4772 scope.go:117] "RemoveContainer" containerID="be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.752419 4772 scope.go:117] "RemoveContainer" containerID="288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.774943 4772 scope.go:117] "RemoveContainer" containerID="1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.791588 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c46s"] Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.799925 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c46s"] Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.801258 4772 scope.go:117] "RemoveContainer" containerID="7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.818541 4772 scope.go:117] "RemoveContainer" containerID="6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.832223 4772 scope.go:117] "RemoveContainer" containerID="427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.854721 4772 scope.go:117] "RemoveContainer" containerID="7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.855101 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7\": container with ID starting with 7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7 not found: ID does not exist" containerID="7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855130 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7"} err="failed to get 
container status \"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7\": rpc error: code = NotFound desc = could not find container \"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7\": container with ID starting with 7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855157 4772 scope.go:117] "RemoveContainer" containerID="6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.855317 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\": container with ID starting with 6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8 not found: ID does not exist" containerID="6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855336 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8"} err="failed to get container status \"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\": rpc error: code = NotFound desc = could not find container \"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\": container with ID starting with 6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855348 4772 scope.go:117] "RemoveContainer" containerID="3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.855481 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\": container with ID starting with 3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3 not found: ID does not exist" containerID="3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855499 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3"} err="failed to get container status \"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\": rpc error: code = NotFound desc = could not find container \"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\": container with ID starting with 3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855512 4772 scope.go:117] "RemoveContainer" containerID="be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.855631 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\": container with ID starting with be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef not found: ID does not exist" containerID="be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855649 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef"} err="failed to get container status \"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\": rpc error: code = NotFound desc = could not find container \"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\": container with ID starting with be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855659 4772 scope.go:117] "RemoveContainer" containerID="288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.855816 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\": container with ID starting with 288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3 not found: ID does not exist" containerID="288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855837 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3"} err="failed to get container status \"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\": rpc error: code = NotFound desc = could not find container \"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\": container with ID starting with 288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855850 4772 scope.go:117] "RemoveContainer" containerID="1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.855971 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\": container with ID starting with 1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e not found: ID does not exist" containerID="1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.855989 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e"} err="failed to get container status \"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\": rpc error: code = NotFound desc = could not find container \"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\": container with ID starting with 1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856000 4772 scope.go:117] "RemoveContainer" containerID="7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.856115 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\": container with ID starting with 7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d not found: ID does 
not exist" containerID="7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856133 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d"} err="failed to get container status \"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\": rpc error: code = NotFound desc = could not find container \"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\": container with ID starting with 7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856144 4772 scope.go:117] "RemoveContainer" containerID="6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.856261 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\": container with ID starting with 6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76 not found: ID does not exist" containerID="6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856303 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76"} err="failed to get container status \"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\": rpc error: code = NotFound desc = could not find container \"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\": container with ID starting with 6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856316 4772 scope.go:117] "RemoveContainer" containerID="427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9" Jan 24 03:52:55 crc kubenswrapper[4772]: E0124 03:52:55.856480 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\": container with ID starting with 427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9 not found: ID does not exist" containerID="427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856502 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9"} err="failed to get container status \"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\": rpc error: code = NotFound desc = could not find container \"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\": container with ID starting with 427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856517 4772 scope.go:117] "RemoveContainer" containerID="7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856803 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7"} err="failed to get container status \"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7\": rpc error: code = NotFound desc = could not find container \"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7\": container with ID starting with 7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.856856 4772 scope.go:117] "RemoveContainer" containerID="6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.861433 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8"} err="failed to get container status \"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\": rpc error: code = NotFound desc = could not find container \"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\": container with ID starting with 6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.861483 4772 scope.go:117] "RemoveContainer" containerID="3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.861820 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3"} err="failed to get container status \"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\": rpc error: code = NotFound desc = could not find container \"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\": container with ID starting with 3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.861853 4772 scope.go:117] "RemoveContainer" containerID="be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862056 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef"} err="failed to get container status \"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\": rpc error: code = NotFound desc = could not find container \"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\": container with ID starting with be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862081 4772 scope.go:117] "RemoveContainer" containerID="288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862263 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3"} err="failed to get container status \"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\": rpc error: code = NotFound desc = could not find container \"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\": container with ID starting with 288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3 not found: ID does not exist" Jan 
24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862290 4772 scope.go:117] "RemoveContainer" containerID="1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862510 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e"} err="failed to get container status \"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\": rpc error: code = NotFound desc = could not find container \"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\": container with ID starting with 1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862541 4772 scope.go:117] "RemoveContainer" containerID="7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862731 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d"} err="failed to get container status \"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\": rpc error: code = NotFound desc = could not find container \"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\": container with ID starting with 7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862784 4772 scope.go:117] "RemoveContainer" containerID="6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862962 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76"} err="failed to get container status \"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\": rpc error: code = NotFound desc = could not find container \"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\": container with ID starting with 6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.862985 4772 scope.go:117] "RemoveContainer" containerID="427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.863147 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9"} err="failed to get container status \"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\": rpc error: code = NotFound desc = could not find container \"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\": container with ID starting with 427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.863174 4772 scope.go:117] "RemoveContainer" containerID="7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.863436 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7"} err="failed to get container status 
\"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7\": rpc error: code = NotFound desc = could not find container \"7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7\": container with ID starting with 7bbeefb72ba58266ac1858e4f6ef7008aca17628fcc68f1b2ffd9c41353221c7 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.863472 4772 scope.go:117] "RemoveContainer" containerID="6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.865362 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8"} err="failed to get container status \"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\": rpc error: code = NotFound desc = could not find container \"6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8\": container with ID starting with 6809ffe686470b553a405751da31fa934959f0d12c9edb5bf8c0938ac462f4d8 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.865397 4772 scope.go:117] "RemoveContainer" containerID="3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.865642 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3"} err="failed to get container status \"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\": rpc error: code = NotFound desc = could not find container \"3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3\": container with ID starting with 3eca98cff85b8c059068f0842c08be045ce4131f44aeff859b452c8e94f2f7c3 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.865673 4772 scope.go:117] "RemoveContainer" containerID="be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.865897 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef"} err="failed to get container status \"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\": rpc error: code = NotFound desc = could not find container \"be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef\": container with ID starting with be0a79c09ad7603575daa400f7afc97d7f570411044339e4d0ef25b59374faef not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.865923 4772 scope.go:117] "RemoveContainer" containerID="288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866090 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3"} err="failed to get container status \"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\": rpc error: code = NotFound desc = could not find container \"288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3\": container with ID starting with 288c73595eab9dc36a20946e4d07bc15ed82ad85e012daa39eda383218a0a6d3 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866109 4772 scope.go:117] "RemoveContainer" 
containerID="1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866269 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e"} err="failed to get container status \"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\": rpc error: code = NotFound desc = could not find container \"1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e\": container with ID starting with 1a61553e47fbadf5ddb3f438ca0c30a4dd665660697675f2bf1804ce93a96b1e not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866289 4772 scope.go:117] "RemoveContainer" containerID="7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866442 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d"} err="failed to get container status \"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\": rpc error: code = NotFound desc = could not find container \"7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d\": container with ID starting with 7c346206f0ec4af116a6b46f1fb82526fd40c00a95d1d22048747afdf091b01d not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866460 4772 scope.go:117] "RemoveContainer" containerID="6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866597 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76"} err="failed to get container status \"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\": rpc error: code = NotFound desc = could not find container \"6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76\": container with ID starting with 6e2c16b380098acf476f1438bd70df857bb2f9cb866e5629319fb4df6a6bed76 not found: ID does not exist" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866613 4772 scope.go:117] "RemoveContainer" containerID="427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9" Jan 24 03:52:55 crc kubenswrapper[4772]: I0124 03:52:55.866768 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9"} err="failed to get container status \"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\": rpc error: code = NotFound desc = could not find container \"427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9\": container with ID starting with 427255d8127ccaf7bddf4e873d490ee276400fee78c04dc5946f78b5a5aa87d9 not found: ID does not exist" Jan 24 03:52:56 crc kubenswrapper[4772]: I0124 03:52:56.686787 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"c35d3340b077248b31d3ca217f8c09a1c04e285dae77b7bbfc87060cc4694426"} Jan 24 03:52:56 crc kubenswrapper[4772]: I0124 03:52:56.687263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" 
event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"ff4294e7e2a8418a9fcfe6ad2d1e67facc5e2712e471ab56317f250b9a0a4d2b"} Jan 24 03:52:56 crc kubenswrapper[4772]: I0124 03:52:56.687285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"a7ca0b32d40a90a54353078c248cabfacf9202334549f791ec71998b7752ef95"} Jan 24 03:52:56 crc kubenswrapper[4772]: I0124 03:52:56.687302 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"66cb36df33153fd730fcbdd889161b5ccc9a0ef3fd00f82778e9d4bbe25c435e"} Jan 24 03:52:56 crc kubenswrapper[4772]: I0124 03:52:56.687316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"d2cd8427164c36a8667114bca6e83f922946d1a531d7101f049f2004c3e5d7be"} Jan 24 03:52:56 crc kubenswrapper[4772]: I0124 03:52:56.687329 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"6cb48c2f71f3620e10b5fa646b1a545735eb84c2a7ee85c15b57aa49b6b5b07e"} Jan 24 03:52:57 crc kubenswrapper[4772]: I0124 03:52:57.673374 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849e85f7-2aca-4f00-a9be-a5f40979ad26" path="/var/lib/kubelet/pods/849e85f7-2aca-4f00-a9be-a5f40979ad26/volumes" Jan 24 03:52:58 crc kubenswrapper[4772]: I0124 03:52:58.706338 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"7094db0d959adfd04ba2f515818d0be675da9b0cb626e1be23fcbe5b50657b65"} Jan 24 03:53:01 crc kubenswrapper[4772]: I0124 03:53:01.730119 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" event={"ID":"a3a973b3-1cf1-4ffd-95c0-b6bf81aa3f1e","Type":"ContainerStarted","Data":"fce0be50cc8281f749fd3e3c1dbc3c2675e51cdcdc1e9e3cbebd4b3a517fece3"} Jan 24 03:53:01 crc kubenswrapper[4772]: I0124 03:53:01.730682 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:53:01 crc kubenswrapper[4772]: I0124 03:53:01.730698 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:53:01 crc kubenswrapper[4772]: I0124 03:53:01.758096 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:53:01 crc kubenswrapper[4772]: I0124 03:53:01.774324 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" podStartSLOduration=7.774304131 podStartE2EDuration="7.774304131s" podCreationTimestamp="2026-01-24 03:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:53:01.769199462 +0000 UTC m=+678.806290187" watchObservedRunningTime="2026-01-24 03:53:01.774304131 +0000 UTC m=+678.811394866" Jan 24 03:53:02 crc kubenswrapper[4772]: I0124 03:53:02.735975 4772 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:53:02 crc kubenswrapper[4772]: I0124 03:53:02.769694 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:53:05 crc kubenswrapper[4772]: I0124 03:53:05.658650 4772 scope.go:117] "RemoveContainer" containerID="4303a17707279e168ab66f97fbff46b2d3b3e2c3dff4c390520b5d78c75594ed" Jan 24 03:53:05 crc kubenswrapper[4772]: E0124 03:53:05.658919 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kqp8g_openshift-multus(3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d)\"" pod="openshift-multus/multus-kqp8g" podUID="3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d" Jan 24 03:53:18 crc kubenswrapper[4772]: I0124 03:53:18.658503 4772 scope.go:117] "RemoveContainer" containerID="4303a17707279e168ab66f97fbff46b2d3b3e2c3dff4c390520b5d78c75594ed" Jan 24 03:53:19 crc kubenswrapper[4772]: I0124 03:53:19.846120 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kqp8g_3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d/kube-multus/2.log" Jan 24 03:53:19 crc kubenswrapper[4772]: I0124 03:53:19.846533 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kqp8g" event={"ID":"3bb38f67-b74f-4d5b-9d83-25fc07bcbb2d","Type":"ContainerStarted","Data":"c238d5f8419e1ad298882cea6fed49a651484304816f76ffc6489600fd9fe446"} Jan 24 03:53:21 crc kubenswrapper[4772]: I0124 03:53:21.991607 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd"] Jan 24 03:53:21 crc kubenswrapper[4772]: I0124 03:53:21.993589 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:21 crc kubenswrapper[4772]: I0124 03:53:21.997081 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.006564 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd"] Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.085048 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.085130 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw7q\" (UniqueName: \"kubernetes.io/projected/721069d6-d930-4768-8902-ccdbcde32201-kube-api-access-bzw7q\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.085167 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.186314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.186389 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzw7q\" (UniqueName: \"kubernetes.io/projected/721069d6-d930-4768-8902-ccdbcde32201-kube-api-access-bzw7q\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.186412 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.186883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.186884 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.207640 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzw7q\" (UniqueName: \"kubernetes.io/projected/721069d6-d930-4768-8902-ccdbcde32201-kube-api-access-bzw7q\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.321560 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.584398 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd"] Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.867478 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" event={"ID":"721069d6-d930-4768-8902-ccdbcde32201","Type":"ContainerStarted","Data":"fe02eda98898fbd41b37137e99436a206e0a2c1239a7640a65ee4fbe83832dbc"} Jan 24 03:53:22 crc kubenswrapper[4772]: I0124 03:53:22.867551 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" event={"ID":"721069d6-d930-4768-8902-ccdbcde32201","Type":"ContainerStarted","Data":"0350fd6951266b515025e3e404efee667b3c9e7e8da27e8752ba5ded46055767"} Jan 24 03:53:23 crc kubenswrapper[4772]: I0124 03:53:23.876777 4772 generic.go:334] "Generic (PLEG): container finished" podID="721069d6-d930-4768-8902-ccdbcde32201" containerID="fe02eda98898fbd41b37137e99436a206e0a2c1239a7640a65ee4fbe83832dbc" exitCode=0 Jan 24 03:53:23 crc kubenswrapper[4772]: I0124 03:53:23.877090 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" event={"ID":"721069d6-d930-4768-8902-ccdbcde32201","Type":"ContainerDied","Data":"fe02eda98898fbd41b37137e99436a206e0a2c1239a7640a65ee4fbe83832dbc"} Jan 24 03:53:23 crc kubenswrapper[4772]: I0124 03:53:23.882852 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 03:53:25 crc kubenswrapper[4772]: I0124 03:53:25.154212 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-66glq" Jan 24 03:53:26 crc kubenswrapper[4772]: I0124 03:53:26.897681 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" 
event={"ID":"721069d6-d930-4768-8902-ccdbcde32201","Type":"ContainerStarted","Data":"d1a547ec34f83c039e51ed3a7c4b19e5c11325706eeffd417df067ab177684dc"} Jan 24 03:53:27 crc kubenswrapper[4772]: I0124 03:53:27.906489 4772 generic.go:334] "Generic (PLEG): container finished" podID="721069d6-d930-4768-8902-ccdbcde32201" containerID="d1a547ec34f83c039e51ed3a7c4b19e5c11325706eeffd417df067ab177684dc" exitCode=0 Jan 24 03:53:27 crc kubenswrapper[4772]: I0124 03:53:27.906604 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" event={"ID":"721069d6-d930-4768-8902-ccdbcde32201","Type":"ContainerDied","Data":"d1a547ec34f83c039e51ed3a7c4b19e5c11325706eeffd417df067ab177684dc"} Jan 24 03:53:28 crc kubenswrapper[4772]: I0124 03:53:28.916242 4772 generic.go:334] "Generic (PLEG): container finished" podID="721069d6-d930-4768-8902-ccdbcde32201" containerID="6f1b7e6c32144671c8f50b61e3729c1a81ad2577fd1682e4aea165a55c2617d7" exitCode=0 Jan 24 03:53:28 crc kubenswrapper[4772]: I0124 03:53:28.916315 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" event={"ID":"721069d6-d930-4768-8902-ccdbcde32201","Type":"ContainerDied","Data":"6f1b7e6c32144671c8f50b61e3729c1a81ad2577fd1682e4aea165a55c2617d7"} Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.262078 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.298431 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzw7q\" (UniqueName: \"kubernetes.io/projected/721069d6-d930-4768-8902-ccdbcde32201-kube-api-access-bzw7q\") pod \"721069d6-d930-4768-8902-ccdbcde32201\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.298493 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-bundle\") pod \"721069d6-d930-4768-8902-ccdbcde32201\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.298620 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-util\") pod \"721069d6-d930-4768-8902-ccdbcde32201\" (UID: \"721069d6-d930-4768-8902-ccdbcde32201\") " Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.300507 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-bundle" (OuterVolumeSpecName: "bundle") pod "721069d6-d930-4768-8902-ccdbcde32201" (UID: "721069d6-d930-4768-8902-ccdbcde32201"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.305295 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/721069d6-d930-4768-8902-ccdbcde32201-kube-api-access-bzw7q" (OuterVolumeSpecName: "kube-api-access-bzw7q") pod "721069d6-d930-4768-8902-ccdbcde32201" (UID: "721069d6-d930-4768-8902-ccdbcde32201"). InnerVolumeSpecName "kube-api-access-bzw7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.309949 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-util" (OuterVolumeSpecName: "util") pod "721069d6-d930-4768-8902-ccdbcde32201" (UID: "721069d6-d930-4768-8902-ccdbcde32201"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.400348 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-util\") on node \"crc\" DevicePath \"\"" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.400888 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzw7q\" (UniqueName: \"kubernetes.io/projected/721069d6-d930-4768-8902-ccdbcde32201-kube-api-access-bzw7q\") on node \"crc\" DevicePath \"\"" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.400929 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/721069d6-d930-4768-8902-ccdbcde32201-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.936909 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" event={"ID":"721069d6-d930-4768-8902-ccdbcde32201","Type":"ContainerDied","Data":"0350fd6951266b515025e3e404efee667b3c9e7e8da27e8752ba5ded46055767"} Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.936981 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd" Jan 24 03:53:30 crc kubenswrapper[4772]: I0124 03:53:30.937297 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0350fd6951266b515025e3e404efee667b3c9e7e8da27e8752ba5ded46055767" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.102386 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85"] Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.103047 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721069d6-d930-4768-8902-ccdbcde32201" containerName="pull" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.103059 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="721069d6-d930-4768-8902-ccdbcde32201" containerName="pull" Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.103076 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721069d6-d930-4768-8902-ccdbcde32201" containerName="util" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.103081 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="721069d6-d930-4768-8902-ccdbcde32201" containerName="util" Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.103088 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721069d6-d930-4768-8902-ccdbcde32201" containerName="extract" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.103094 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="721069d6-d930-4768-8902-ccdbcde32201" containerName="extract" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.103201 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="721069d6-d930-4768-8902-ccdbcde32201" containerName="extract" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.103561 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:40 crc kubenswrapper[4772]: W0124 03:53:40.110144 4772 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-qgzf9": failed to list *v1.Secret: secrets "manager-account-dockercfg-qgzf9" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.110183 4772 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-qgzf9\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"manager-account-dockercfg-qgzf9\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 03:53:40 crc kubenswrapper[4772]: W0124 03:53:40.110228 4772 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "metallb-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.110247 4772 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 03:53:40 crc kubenswrapper[4772]: W0124 03:53:40.110298 4772 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-cert": failed to list *v1.Secret: secrets "metallb-operator-webhook-server-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.110308 4772 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-webhook-server-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 03:53:40 crc kubenswrapper[4772]: W0124 03:53:40.111354 4772 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.111384 4772 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 03:53:40 crc kubenswrapper[4772]: W0124 03:53:40.111955 4772 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 24 03:53:40 crc kubenswrapper[4772]: E0124 03:53:40.111977 4772 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.158715 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85"] Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.235327 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e5b7120-c263-41a3-9360-d8c132d6235c-webhook-cert\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.235614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcndn\" (UniqueName: \"kubernetes.io/projected/1e5b7120-c263-41a3-9360-d8c132d6235c-kube-api-access-mcndn\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.235705 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e5b7120-c263-41a3-9360-d8c132d6235c-apiservice-cert\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.321125 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp"] Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.322083 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.325247 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.325298 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-59kwm" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.325822 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.337051 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e5b7120-c263-41a3-9360-d8c132d6235c-webhook-cert\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.337524 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcndn\" (UniqueName: \"kubernetes.io/projected/1e5b7120-c263-41a3-9360-d8c132d6235c-kube-api-access-mcndn\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.337602 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e5b7120-c263-41a3-9360-d8c132d6235c-apiservice-cert\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.340997 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp"] Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.438728 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmcm\" (UniqueName: \"kubernetes.io/projected/0e9057ec-4b8c-4cea-a3d3-52001b746757-kube-api-access-9tmcm\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.438829 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e9057ec-4b8c-4cea-a3d3-52001b746757-webhook-cert\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.438906 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e9057ec-4b8c-4cea-a3d3-52001b746757-apiservice-cert\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc 
kubenswrapper[4772]: I0124 03:53:40.539540 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e9057ec-4b8c-4cea-a3d3-52001b746757-apiservice-cert\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.539637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmcm\" (UniqueName: \"kubernetes.io/projected/0e9057ec-4b8c-4cea-a3d3-52001b746757-kube-api-access-9tmcm\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.539677 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e9057ec-4b8c-4cea-a3d3-52001b746757-webhook-cert\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.554718 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e9057ec-4b8c-4cea-a3d3-52001b746757-apiservice-cert\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.555102 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e9057ec-4b8c-4cea-a3d3-52001b746757-webhook-cert\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.934768 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.978332 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.992640 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmcm\" (UniqueName: \"kubernetes.io/projected/0e9057ec-4b8c-4cea-a3d3-52001b746757-kube-api-access-9tmcm\") pod \"metallb-operator-webhook-server-6f9db7bdcf-9p4fp\" (UID: \"0e9057ec-4b8c-4cea-a3d3-52001b746757\") " pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:40 crc kubenswrapper[4772]: I0124 03:53:40.994353 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcndn\" (UniqueName: \"kubernetes.io/projected/1e5b7120-c263-41a3-9360-d8c132d6235c-kube-api-access-mcndn\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.168755 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.181939 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e5b7120-c263-41a3-9360-d8c132d6235c-webhook-cert\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.183330 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e5b7120-c263-41a3-9360-d8c132d6235c-apiservice-cert\") pod \"metallb-operator-controller-manager-7fd7b6df9d-zhx85\" (UID: \"1e5b7120-c263-41a3-9360-d8c132d6235c\") " pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.236580 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.355841 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qgzf9" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.359598 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.539033 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp"] Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.598134 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 24 03:53:41 crc kubenswrapper[4772]: I0124 03:53:41.601187 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85"] Jan 24 03:53:42 crc kubenswrapper[4772]: I0124 03:53:42.011124 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" event={"ID":"0e9057ec-4b8c-4cea-a3d3-52001b746757","Type":"ContainerStarted","Data":"ec35f163165524c2495ceb29b0c64a2a3a8759d9699999a3aebd0104f1d9acd0"} Jan 24 03:53:42 crc kubenswrapper[4772]: I0124 03:53:42.012097 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" event={"ID":"1e5b7120-c263-41a3-9360-d8c132d6235c","Type":"ContainerStarted","Data":"030004a04a359a1067f572df219adaf7ae863916b5236bd4879549a07e602f64"} Jan 24 03:53:48 crc kubenswrapper[4772]: I0124 03:53:48.071567 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" event={"ID":"1e5b7120-c263-41a3-9360-d8c132d6235c","Type":"ContainerStarted","Data":"b4f9ff4b54bbffb6760fd406d3a0d1b679e3d0cf5a8db37b2af7fa0a481d78e5"} Jan 24 03:53:48 crc kubenswrapper[4772]: I0124 03:53:48.072240 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:53:48 crc kubenswrapper[4772]: I0124 03:53:48.096991 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" 
podStartSLOduration=1.85582053 podStartE2EDuration="8.096977009s" podCreationTimestamp="2026-01-24 03:53:40 +0000 UTC" firstStartedPulling="2026-01-24 03:53:41.61057466 +0000 UTC m=+718.647665385" lastFinishedPulling="2026-01-24 03:53:47.851731139 +0000 UTC m=+724.888821864" observedRunningTime="2026-01-24 03:53:48.093661632 +0000 UTC m=+725.130752347" watchObservedRunningTime="2026-01-24 03:53:48.096977009 +0000 UTC m=+725.134067734" Jan 24 03:53:49 crc kubenswrapper[4772]: I0124 03:53:49.092266 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" event={"ID":"0e9057ec-4b8c-4cea-a3d3-52001b746757","Type":"ContainerStarted","Data":"cda4e62cea8b8896ed54cc93bf472e4ba333b56348ede97295ea2aa0cf9fb608"} Jan 24 03:53:49 crc kubenswrapper[4772]: I0124 03:53:49.092684 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:53:49 crc kubenswrapper[4772]: I0124 03:53:49.121436 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" podStartSLOduration=2.788790693 podStartE2EDuration="9.121412576s" podCreationTimestamp="2026-01-24 03:53:40 +0000 UTC" firstStartedPulling="2026-01-24 03:53:41.542968299 +0000 UTC m=+718.580059024" lastFinishedPulling="2026-01-24 03:53:47.875590162 +0000 UTC m=+724.912680907" observedRunningTime="2026-01-24 03:53:49.116684507 +0000 UTC m=+726.153775272" watchObservedRunningTime="2026-01-24 03:53:49.121412576 +0000 UTC m=+726.158503311" Jan 24 03:54:01 crc kubenswrapper[4772]: I0124 03:54:01.241397 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6f9db7bdcf-9p4fp" Jan 24 03:54:18 crc kubenswrapper[4772]: I0124 03:54:18.129874 4772 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 03:54:21 crc kubenswrapper[4772]: I0124 03:54:21.363210 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7fd7b6df9d-zhx85" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.263915 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wpkk4"] Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.266307 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.269369 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.269624 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.269753 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5ngk5" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.270406 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l"] Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.271264 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.272697 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.281526 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l"] Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.281854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-metrics\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.281908 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhws9\" (UniqueName: \"kubernetes.io/projected/57fd94bd-5a17-4006-8495-fb020332ba40-kube-api-access-nhws9\") pod \"frr-k8s-webhook-server-7df86c4f6c-vs95l\" (UID: \"57fd94bd-5a17-4006-8495-fb020332ba40\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.281956 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-conf\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.281989 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-reloader\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.282016 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-sockets\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.282096 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64wd\" (UniqueName: \"kubernetes.io/projected/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-kube-api-access-b64wd\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.282135 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57fd94bd-5a17-4006-8495-fb020332ba40-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vs95l\" (UID: \"57fd94bd-5a17-4006-8495-fb020332ba40\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.282208 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-metrics-certs\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " 
pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.282232 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-startup\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.348153 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6krl4"] Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.348958 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.351657 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.351903 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-x5vgv" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.352038 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.357209 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.368322 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8skfz"] Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.369134 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.371486 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382571 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8skfz"] Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382620 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67ft\" (UniqueName: \"kubernetes.io/projected/af65200d-34f2-478a-9c7b-48c6eb982b11-kube-api-access-n67ft\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382652 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57fd94bd-5a17-4006-8495-fb020332ba40-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vs95l\" (UID: \"57fd94bd-5a17-4006-8495-fb020332ba40\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382681 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-cert\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382698 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-metrics-certs\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382713 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-startup\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382762 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d6566aa5-3d44-45c0-94f7-6c618ac3b626-metallb-excludel2\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382806 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-metrics\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382825 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhws9\" (UniqueName: \"kubernetes.io/projected/57fd94bd-5a17-4006-8495-fb020332ba40-kube-api-access-nhws9\") pod \"frr-k8s-webhook-server-7df86c4f6c-vs95l\" (UID: \"57fd94bd-5a17-4006-8495-fb020332ba40\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-conf\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382868 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-reloader\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382885 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-metrics-certs\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlcj6\" (UniqueName: \"kubernetes.io/projected/d6566aa5-3d44-45c0-94f7-6c618ac3b626-kube-api-access-dlcj6\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " 
pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382919 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-sockets\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64wd\" (UniqueName: \"kubernetes.io/projected/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-kube-api-access-b64wd\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.382951 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-metrics-certs\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.383054 4772 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.383094 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57fd94bd-5a17-4006-8495-fb020332ba40-cert podName:57fd94bd-5a17-4006-8495-fb020332ba40 nodeName:}" failed. No retries permitted until 2026-01-24 03:54:22.883079862 +0000 UTC m=+759.920170577 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/57fd94bd-5a17-4006-8495-fb020332ba40-cert") pod "frr-k8s-webhook-server-7df86c4f6c-vs95l" (UID: "57fd94bd-5a17-4006-8495-fb020332ba40") : secret "frr-k8s-webhook-server-cert" not found Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.388100 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-conf\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.388385 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-metrics-certs\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.388655 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-reloader\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.388874 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-sockets\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.389186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-frr-startup\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.389351 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-metrics\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.410726 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64wd\" (UniqueName: \"kubernetes.io/projected/e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7-kube-api-access-b64wd\") pod \"frr-k8s-wpkk4\" (UID: \"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7\") " pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.422270 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhws9\" (UniqueName: \"kubernetes.io/projected/57fd94bd-5a17-4006-8495-fb020332ba40-kube-api-access-nhws9\") pod \"frr-k8s-webhook-server-7df86c4f6c-vs95l\" (UID: \"57fd94bd-5a17-4006-8495-fb020332ba40\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.483535 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlcj6\" (UniqueName: \"kubernetes.io/projected/d6566aa5-3d44-45c0-94f7-6c618ac3b626-kube-api-access-dlcj6\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.483582 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-metrics-certs\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.483604 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-metrics-certs\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.483628 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n67ft\" (UniqueName: \"kubernetes.io/projected/af65200d-34f2-478a-9c7b-48c6eb982b11-kube-api-access-n67ft\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.483662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-cert\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.483685 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " 
pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.483701 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d6566aa5-3d44-45c0-94f7-6c618ac3b626-metallb-excludel2\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.484101 4772 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.484147 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-metrics-certs podName:af65200d-34f2-478a-9c7b-48c6eb982b11 nodeName:}" failed. No retries permitted until 2026-01-24 03:54:22.984133717 +0000 UTC m=+760.021224442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-metrics-certs") pod "controller-6968d8fdc4-8skfz" (UID: "af65200d-34f2-478a-9c7b-48c6eb982b11") : secret "controller-certs-secret" not found Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.484325 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d6566aa5-3d44-45c0-94f7-6c618ac3b626-metallb-excludel2\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.484410 4772 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.484439 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist podName:d6566aa5-3d44-45c0-94f7-6c618ac3b626 nodeName:}" failed. No retries permitted until 2026-01-24 03:54:22.984431116 +0000 UTC m=+760.021521841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist") pod "speaker-6krl4" (UID: "d6566aa5-3d44-45c0-94f7-6c618ac3b626") : secret "metallb-memberlist" not found Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.485826 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.487305 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-metrics-certs\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.497834 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-cert\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.500317 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlcj6\" (UniqueName: \"kubernetes.io/projected/d6566aa5-3d44-45c0-94f7-6c618ac3b626-kube-api-access-dlcj6\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.507568 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n67ft\" (UniqueName: \"kubernetes.io/projected/af65200d-34f2-478a-9c7b-48c6eb982b11-kube-api-access-n67ft\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.584819 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.889697 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57fd94bd-5a17-4006-8495-fb020332ba40-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vs95l\" (UID: \"57fd94bd-5a17-4006-8495-fb020332ba40\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.893755 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57fd94bd-5a17-4006-8495-fb020332ba40-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vs95l\" (UID: \"57fd94bd-5a17-4006-8495-fb020332ba40\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.990831 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.990925 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-metrics-certs\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.990986 4772 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 24 03:54:22 crc kubenswrapper[4772]: E0124 03:54:22.991052 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist podName:d6566aa5-3d44-45c0-94f7-6c618ac3b626 nodeName:}" failed. No retries permitted until 2026-01-24 03:54:23.99103656 +0000 UTC m=+761.028127285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist") pod "speaker-6krl4" (UID: "d6566aa5-3d44-45c0-94f7-6c618ac3b626") : secret "metallb-memberlist" not found Jan 24 03:54:22 crc kubenswrapper[4772]: I0124 03:54:22.996316 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af65200d-34f2-478a-9c7b-48c6eb982b11-metrics-certs\") pod \"controller-6968d8fdc4-8skfz\" (UID: \"af65200d-34f2-478a-9c7b-48c6eb982b11\") " pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:23 crc kubenswrapper[4772]: I0124 03:54:23.192971 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:23 crc kubenswrapper[4772]: I0124 03:54:23.283134 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:23 crc kubenswrapper[4772]: I0124 03:54:23.296894 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerStarted","Data":"937d21332ec3c0d75c2ab19af612108084f186cc4f457de768b886a0967a6036"} Jan 24 03:54:23 crc kubenswrapper[4772]: I0124 03:54:23.441116 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l"] Jan 24 03:54:23 crc kubenswrapper[4772]: W0124 03:54:23.448298 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57fd94bd_5a17_4006_8495_fb020332ba40.slice/crio-4e7177134c2d386eb6060852b5001af4c137fa06b6fd26dcf065450abed7c2e0 WatchSource:0}: Error finding container 4e7177134c2d386eb6060852b5001af4c137fa06b6fd26dcf065450abed7c2e0: Status 404 returned error can't find the container with id 4e7177134c2d386eb6060852b5001af4c137fa06b6fd26dcf065450abed7c2e0 Jan 24 03:54:23 crc kubenswrapper[4772]: I0124 03:54:23.780930 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8skfz"] Jan 24 03:54:24 crc kubenswrapper[4772]: I0124 03:54:24.002615 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:24 crc kubenswrapper[4772]: E0124 03:54:24.003186 4772 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 24 03:54:24 crc kubenswrapper[4772]: E0124 03:54:24.003240 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist podName:d6566aa5-3d44-45c0-94f7-6c618ac3b626 nodeName:}" failed. No retries permitted until 2026-01-24 03:54:26.003222457 +0000 UTC m=+763.040313172 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist") pod "speaker-6krl4" (UID: "d6566aa5-3d44-45c0-94f7-6c618ac3b626") : secret "metallb-memberlist" not found Jan 24 03:54:24 crc kubenswrapper[4772]: I0124 03:54:24.310558 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" event={"ID":"57fd94bd-5a17-4006-8495-fb020332ba40","Type":"ContainerStarted","Data":"4e7177134c2d386eb6060852b5001af4c137fa06b6fd26dcf065450abed7c2e0"} Jan 24 03:54:24 crc kubenswrapper[4772]: I0124 03:54:24.312004 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8skfz" event={"ID":"af65200d-34f2-478a-9c7b-48c6eb982b11","Type":"ContainerStarted","Data":"bbf1ecf9158efe58c6270549f8dd8508a49b257c4ec8fe17a43339499019ad81"} Jan 24 03:54:24 crc kubenswrapper[4772]: I0124 03:54:24.312029 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8skfz" event={"ID":"af65200d-34f2-478a-9c7b-48c6eb982b11","Type":"ContainerStarted","Data":"a4cd91d22da05477b47b2da277711e14b2db45b36b2bb1d120f30e436f06aeaa"} Jan 24 03:54:26 crc kubenswrapper[4772]: I0124 03:54:26.045142 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:26 crc kubenswrapper[4772]: I0124 03:54:26.062223 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6566aa5-3d44-45c0-94f7-6c618ac3b626-memberlist\") pod \"speaker-6krl4\" (UID: \"d6566aa5-3d44-45c0-94f7-6c618ac3b626\") " pod="metallb-system/speaker-6krl4" Jan 24 03:54:26 crc kubenswrapper[4772]: I0124 03:54:26.271325 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6krl4" Jan 24 03:54:27 crc kubenswrapper[4772]: I0124 03:54:27.334503 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6krl4" event={"ID":"d6566aa5-3d44-45c0-94f7-6c618ac3b626","Type":"ContainerStarted","Data":"8144d15f59dc325b69cdadc7def407a19e50be7adc90de0106cbeb77646d459b"} Jan 24 03:54:27 crc kubenswrapper[4772]: I0124 03:54:27.334880 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6krl4" event={"ID":"d6566aa5-3d44-45c0-94f7-6c618ac3b626","Type":"ContainerStarted","Data":"e680bdafb6eb6b1a7bdea20b7def6488d9c5803dbbcd8b4e133e2fcafb5bbf86"} Jan 24 03:54:28 crc kubenswrapper[4772]: I0124 03:54:28.347651 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8skfz" event={"ID":"af65200d-34f2-478a-9c7b-48c6eb982b11","Type":"ContainerStarted","Data":"bb6dcb77ca864934b98ccf8aedab32401b8305c85d41d1caf27854a1e72145c4"} Jan 24 03:54:28 crc kubenswrapper[4772]: I0124 03:54:28.348235 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:28 crc kubenswrapper[4772]: I0124 03:54:28.367665 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8skfz" podStartSLOduration=2.71874725 podStartE2EDuration="6.367648598s" podCreationTimestamp="2026-01-24 03:54:22 +0000 UTC" firstStartedPulling="2026-01-24 03:54:23.98090697 +0000 UTC m=+761.017997735" lastFinishedPulling="2026-01-24 03:54:27.629808358 +0000 UTC m=+764.666899083" observedRunningTime="2026-01-24 03:54:28.366940967 +0000 UTC m=+765.404031702" watchObservedRunningTime="2026-01-24 03:54:28.367648598 +0000 UTC m=+765.404739323" Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.370076 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" event={"ID":"57fd94bd-5a17-4006-8495-fb020332ba40","Type":"ContainerStarted","Data":"49040cb7bf257248cab9ae0878bf46206c1fb241303a53fa3470b5bebd169f44"} Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.372006 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.372173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6krl4" event={"ID":"d6566aa5-3d44-45c0-94f7-6c618ac3b626","Type":"ContainerStarted","Data":"0cbb2d4de9d7337b4ac26cbdd469de4db0c94c0094d8b39da0dcd5d7b76f9d7d"} Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.372340 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6krl4" Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.373424 4772 generic.go:334] "Generic (PLEG): container finished" podID="e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7" containerID="8f5cdb7c32e56664f55300038d6a4e790db296a7a465b58638c6de160ec0ddba" exitCode=0 Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.373486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerDied","Data":"8f5cdb7c32e56664f55300038d6a4e790db296a7a465b58638c6de160ec0ddba"} Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.404914 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" 
podStartSLOduration=2.2198974160000002 podStartE2EDuration="10.404889957s" podCreationTimestamp="2026-01-24 03:54:22 +0000 UTC" firstStartedPulling="2026-01-24 03:54:23.451226107 +0000 UTC m=+760.488316822" lastFinishedPulling="2026-01-24 03:54:31.636218638 +0000 UTC m=+768.673309363" observedRunningTime="2026-01-24 03:54:32.400310462 +0000 UTC m=+769.437401207" watchObservedRunningTime="2026-01-24 03:54:32.404889957 +0000 UTC m=+769.441980692" Jan 24 03:54:32 crc kubenswrapper[4772]: I0124 03:54:32.437021 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6krl4" podStartSLOduration=5.470469276 podStartE2EDuration="10.436998922s" podCreationTimestamp="2026-01-24 03:54:22 +0000 UTC" firstStartedPulling="2026-01-24 03:54:26.635121995 +0000 UTC m=+763.672212720" lastFinishedPulling="2026-01-24 03:54:31.601651641 +0000 UTC m=+768.638742366" observedRunningTime="2026-01-24 03:54:32.43184369 +0000 UTC m=+769.468934435" watchObservedRunningTime="2026-01-24 03:54:32.436998922 +0000 UTC m=+769.474089657" Jan 24 03:54:33 crc kubenswrapper[4772]: I0124 03:54:33.287134 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8skfz" Jan 24 03:54:33 crc kubenswrapper[4772]: I0124 03:54:33.383478 4772 generic.go:334] "Generic (PLEG): container finished" podID="e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7" containerID="33034f2f7efb5fd710cd174c7f5e3519af8620e21502c516da6e890ccff018a3" exitCode=0 Jan 24 03:54:33 crc kubenswrapper[4772]: I0124 03:54:33.383658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerDied","Data":"33034f2f7efb5fd710cd174c7f5e3519af8620e21502c516da6e890ccff018a3"} Jan 24 03:54:33 crc kubenswrapper[4772]: E0124 03:54:33.685535 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4d8b2b6_95a4_4566_9c4c_d38b90c2eea7.slice/crio-conmon-81eaa681f57e72051773cd936587c3272671827d84e7218f3d353d2cf4c1f147.scope\": RecentStats: unable to find data in memory cache]" Jan 24 03:54:34 crc kubenswrapper[4772]: I0124 03:54:34.393163 4772 generic.go:334] "Generic (PLEG): container finished" podID="e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7" containerID="81eaa681f57e72051773cd936587c3272671827d84e7218f3d353d2cf4c1f147" exitCode=0 Jan 24 03:54:34 crc kubenswrapper[4772]: I0124 03:54:34.393255 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerDied","Data":"81eaa681f57e72051773cd936587c3272671827d84e7218f3d353d2cf4c1f147"} Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.402380 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerStarted","Data":"25bb8af98062f0acc0848f15eb8e4282c4797bb7cdde640c43fa4826fccc3f85"} Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.402716 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerStarted","Data":"6c90a2e2ad64c06522bec7d613f636db26e9503bd79a9c46f341388ad02a1750"} Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.402729 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.402757 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerStarted","Data":"2cfcec069b2657436a6b0d0c35f274de536ecfe5d3713185d91d943547cb5c0a"} Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.402770 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerStarted","Data":"eab88e25bade738e696ce292f5fd29b4745a56102810fec2921819cec78fcc65"} Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.402780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerStarted","Data":"7d21d8bdcaff5e8a123655316dbfe4b379d725f8d3669fc99e40e06b2efed62f"} Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.402790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpkk4" event={"ID":"e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7","Type":"ContainerStarted","Data":"b7f2c35d9cd3fbdc8d95413e2f2673d74b7b2ad81a325386f75f48226bfd159d"} Jan 24 03:54:35 crc kubenswrapper[4772]: I0124 03:54:35.426569 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wpkk4" podStartSLOduration=4.500023088 podStartE2EDuration="13.426551019s" podCreationTimestamp="2026-01-24 03:54:22 +0000 UTC" firstStartedPulling="2026-01-24 03:54:22.677317664 +0000 UTC m=+759.714408389" lastFinishedPulling="2026-01-24 03:54:31.603845595 +0000 UTC m=+768.640936320" observedRunningTime="2026-01-24 03:54:35.424388826 +0000 UTC m=+772.461479551" watchObservedRunningTime="2026-01-24 03:54:35.426551019 +0000 UTC m=+772.463641744" Jan 24 03:54:36 crc kubenswrapper[4772]: I0124 03:54:36.276454 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6krl4" Jan 24 03:54:37 crc kubenswrapper[4772]: I0124 03:54:37.585943 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:37 crc kubenswrapper[4772]: I0124 03:54:37.637377 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.199251 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vs95l" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.309428 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-2djxr"] Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.310299 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-2djxr" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.313172 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-g2s2q" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.313991 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.314174 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.332684 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-2djxr"] Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.463441 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtps\" (UniqueName: \"kubernetes.io/projected/2b6de1f3-d617-4944-b90e-5d0c7a3cd31f-kube-api-access-wbtps\") pod \"mariadb-operator-index-2djxr\" (UID: \"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f\") " pod="openstack-operators/mariadb-operator-index-2djxr" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.564667 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtps\" (UniqueName: \"kubernetes.io/projected/2b6de1f3-d617-4944-b90e-5d0c7a3cd31f-kube-api-access-wbtps\") pod \"mariadb-operator-index-2djxr\" (UID: \"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f\") " pod="openstack-operators/mariadb-operator-index-2djxr" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.584140 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.595360 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.611959 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtps\" (UniqueName: \"kubernetes.io/projected/2b6de1f3-d617-4944-b90e-5d0c7a3cd31f-kube-api-access-wbtps\") pod \"mariadb-operator-index-2djxr\" (UID: \"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f\") " pod="openstack-operators/mariadb-operator-index-2djxr" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.629566 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-g2s2q" Jan 24 03:54:43 crc kubenswrapper[4772]: I0124 03:54:43.637839 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-2djxr" Jan 24 03:54:44 crc kubenswrapper[4772]: I0124 03:54:44.132185 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-2djxr"] Jan 24 03:54:44 crc kubenswrapper[4772]: W0124 03:54:44.146932 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b6de1f3_d617_4944_b90e_5d0c7a3cd31f.slice/crio-a21814bddca0aae9c5ded6ec21d367b31039b9cbfc2979418bef7aa5bca4926c WatchSource:0}: Error finding container a21814bddca0aae9c5ded6ec21d367b31039b9cbfc2979418bef7aa5bca4926c: Status 404 returned error can't find the container with id a21814bddca0aae9c5ded6ec21d367b31039b9cbfc2979418bef7aa5bca4926c Jan 24 03:54:44 crc kubenswrapper[4772]: I0124 03:54:44.474235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2djxr" event={"ID":"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f","Type":"ContainerStarted","Data":"a21814bddca0aae9c5ded6ec21d367b31039b9cbfc2979418bef7aa5bca4926c"} Jan 24 03:54:46 crc kubenswrapper[4772]: I0124 03:54:46.495246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2djxr" event={"ID":"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f","Type":"ContainerStarted","Data":"d52cee46a25e2f874acbf79f3f9e33ede53df5048128508e87bd227caee0e392"} Jan 24 03:54:46 crc kubenswrapper[4772]: I0124 03:54:46.497668 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-2djxr"] Jan 24 03:54:46 crc kubenswrapper[4772]: I0124 03:54:46.900584 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:54:46 crc kubenswrapper[4772]: I0124 03:54:46.900989 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.100869 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-2djxr" podStartSLOduration=2.434837327 podStartE2EDuration="4.100834322s" podCreationTimestamp="2026-01-24 03:54:43 +0000 UTC" firstStartedPulling="2026-01-24 03:54:44.152164016 +0000 UTC m=+781.189254741" lastFinishedPulling="2026-01-24 03:54:45.818161001 +0000 UTC m=+782.855251736" observedRunningTime="2026-01-24 03:54:46.525244039 +0000 UTC m=+783.562334794" watchObservedRunningTime="2026-01-24 03:54:47.100834322 +0000 UTC m=+784.137925077" Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.102625 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-svwvf"] Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.106072 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.117686 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-svwvf"] Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.240833 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktbv\" (UniqueName: \"kubernetes.io/projected/1f0eff61-87e2-4fe3-82af-e42e31fbe2e5-kube-api-access-xktbv\") pod \"mariadb-operator-index-svwvf\" (UID: \"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5\") " pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.341995 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktbv\" (UniqueName: \"kubernetes.io/projected/1f0eff61-87e2-4fe3-82af-e42e31fbe2e5-kube-api-access-xktbv\") pod \"mariadb-operator-index-svwvf\" (UID: \"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5\") " pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.377795 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktbv\" (UniqueName: \"kubernetes.io/projected/1f0eff61-87e2-4fe3-82af-e42e31fbe2e5-kube-api-access-xktbv\") pod \"mariadb-operator-index-svwvf\" (UID: \"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5\") " pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.435687 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:47 crc kubenswrapper[4772]: I0124 03:54:47.741249 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-svwvf"] Jan 24 03:54:48 crc kubenswrapper[4772]: I0124 03:54:48.520783 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-svwvf" event={"ID":"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5","Type":"ContainerStarted","Data":"9d16348e6f86c1151d231ecaff3ee5d2e1e5fc0a448445e82f8e4bc3ef2ea5ec"} Jan 24 03:54:48 crc kubenswrapper[4772]: I0124 03:54:48.520824 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-2djxr" podUID="2b6de1f3-d617-4944-b90e-5d0c7a3cd31f" containerName="registry-server" containerID="cri-o://d52cee46a25e2f874acbf79f3f9e33ede53df5048128508e87bd227caee0e392" gracePeriod=2 Jan 24 03:54:50 crc kubenswrapper[4772]: I0124 03:54:50.539160 4772 generic.go:334] "Generic (PLEG): container finished" podID="2b6de1f3-d617-4944-b90e-5d0c7a3cd31f" containerID="d52cee46a25e2f874acbf79f3f9e33ede53df5048128508e87bd227caee0e392" exitCode=0 Jan 24 03:54:50 crc kubenswrapper[4772]: I0124 03:54:50.539297 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2djxr" event={"ID":"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f","Type":"ContainerDied","Data":"d52cee46a25e2f874acbf79f3f9e33ede53df5048128508e87bd227caee0e392"} Jan 24 03:54:50 crc kubenswrapper[4772]: I0124 03:54:50.541460 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-svwvf" event={"ID":"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5","Type":"ContainerStarted","Data":"03e491b4775f254e16a95633cf991c8c272d23ccb1ced24816b76806cbf27fc5"} Jan 24 03:54:50 crc kubenswrapper[4772]: I0124 03:54:50.560411 4772 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-svwvf" podStartSLOduration=3.076983613 podStartE2EDuration="3.560395242s" podCreationTimestamp="2026-01-24 03:54:47 +0000 UTC" firstStartedPulling="2026-01-24 03:54:47.758670566 +0000 UTC m=+784.795761301" lastFinishedPulling="2026-01-24 03:54:48.242082175 +0000 UTC m=+785.279172930" observedRunningTime="2026-01-24 03:54:50.5599993 +0000 UTC m=+787.597090025" watchObservedRunningTime="2026-01-24 03:54:50.560395242 +0000 UTC m=+787.597485967" Jan 24 03:54:50 crc kubenswrapper[4772]: I0124 03:54:50.774661 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-2djxr" Jan 24 03:54:50 crc kubenswrapper[4772]: I0124 03:54:50.908559 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbtps\" (UniqueName: \"kubernetes.io/projected/2b6de1f3-d617-4944-b90e-5d0c7a3cd31f-kube-api-access-wbtps\") pod \"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f\" (UID: \"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f\") " Jan 24 03:54:50 crc kubenswrapper[4772]: I0124 03:54:50.916810 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6de1f3-d617-4944-b90e-5d0c7a3cd31f-kube-api-access-wbtps" (OuterVolumeSpecName: "kube-api-access-wbtps") pod "2b6de1f3-d617-4944-b90e-5d0c7a3cd31f" (UID: "2b6de1f3-d617-4944-b90e-5d0c7a3cd31f"). InnerVolumeSpecName "kube-api-access-wbtps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:54:51 crc kubenswrapper[4772]: I0124 03:54:51.010845 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbtps\" (UniqueName: \"kubernetes.io/projected/2b6de1f3-d617-4944-b90e-5d0c7a3cd31f-kube-api-access-wbtps\") on node \"crc\" DevicePath \"\"" Jan 24 03:54:51 crc kubenswrapper[4772]: I0124 03:54:51.551215 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2djxr" event={"ID":"2b6de1f3-d617-4944-b90e-5d0c7a3cd31f","Type":"ContainerDied","Data":"a21814bddca0aae9c5ded6ec21d367b31039b9cbfc2979418bef7aa5bca4926c"} Jan 24 03:54:51 crc kubenswrapper[4772]: I0124 03:54:51.551249 4772 util.go:48] "No ready sandbox for pod can be found. 
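The pod_startup_latency_tracker entries carry four timestamps, and the two reported durations are derivable from them: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Both tracker entries in this log check out to within clock-source rounding (the few-nanosecond residuals match the monotonic m=+ readings rather than the wall-clock strings). A sketch that reproduces the arithmetic, with timestamps copied from the mariadb-operator-index-svwvf entry:

package main

import (
	"fmt"
	"time"
)

// layout matches the wall-clock timestamp format in the log entries.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-24 03:54:47 +0000 UTC")
	firstPull := mustParse("2026-01-24 03:54:47.758670566 +0000 UTC")
	lastPull := mustParse("2026-01-24 03:54:48.242082175 +0000 UTC")
	running := mustParse("2026-01-24 03:54:50.560395242 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 3.560395242s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration with pull time excluded
	fmt.Println("e2e:", e2e)
	fmt.Println("slo:", slo) // ~3.0769836s; log reports 3.076983613s via monotonic readings
}
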
Need to start a new one" pod="openstack-operators/mariadb-operator-index-2djxr" Jan 24 03:54:51 crc kubenswrapper[4772]: I0124 03:54:51.551306 4772 scope.go:117] "RemoveContainer" containerID="d52cee46a25e2f874acbf79f3f9e33ede53df5048128508e87bd227caee0e392" Jan 24 03:54:51 crc kubenswrapper[4772]: I0124 03:54:51.599494 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-2djxr"] Jan 24 03:54:51 crc kubenswrapper[4772]: I0124 03:54:51.604066 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-2djxr"] Jan 24 03:54:51 crc kubenswrapper[4772]: I0124 03:54:51.670149 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6de1f3-d617-4944-b90e-5d0c7a3cd31f" path="/var/lib/kubelet/pods/2b6de1f3-d617-4944-b90e-5d0c7a3cd31f/volumes" Jan 24 03:54:52 crc kubenswrapper[4772]: I0124 03:54:52.587697 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wpkk4" Jan 24 03:54:57 crc kubenswrapper[4772]: I0124 03:54:57.436969 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:57 crc kubenswrapper[4772]: I0124 03:54:57.437656 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:57 crc kubenswrapper[4772]: I0124 03:54:57.484695 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:54:57 crc kubenswrapper[4772]: I0124 03:54:57.634799 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.300472 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5"] Jan 24 03:55:03 crc kubenswrapper[4772]: E0124 03:55:03.301083 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6de1f3-d617-4944-b90e-5d0c7a3cd31f" containerName="registry-server" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.301100 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6de1f3-d617-4944-b90e-5d0c7a3cd31f" containerName="registry-server" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.301240 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6de1f3-d617-4944-b90e-5d0c7a3cd31f" containerName="registry-server" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.302272 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.309900 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wzk78" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.315298 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5"] Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.489640 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.489707 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.489728 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nl58\" (UniqueName: \"kubernetes.io/projected/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-kube-api-access-2nl58\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.590462 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.590539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.590559 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nl58\" (UniqueName: \"kubernetes.io/projected/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-kube-api-access-2nl58\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.591191 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.591969 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.625825 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nl58\" (UniqueName: \"kubernetes.io/projected/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-kube-api-access-2nl58\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:03 crc kubenswrapper[4772]: I0124 03:55:03.626329 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:04 crc kubenswrapper[4772]: I0124 03:55:04.021089 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5"] Jan 24 03:55:04 crc kubenswrapper[4772]: W0124 03:55:04.021652 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ebaf797_e602_47ea_a5e4_24fd2f2ecf01.slice/crio-40161c7436eb94e818f0ec37ca12ae9658b77e3ba22754b574f8575cdab33f44 WatchSource:0}: Error finding container 40161c7436eb94e818f0ec37ca12ae9658b77e3ba22754b574f8575cdab33f44: Status 404 returned error can't find the container with id 40161c7436eb94e818f0ec37ca12ae9658b77e3ba22754b574f8575cdab33f44 Jan 24 03:55:04 crc kubenswrapper[4772]: I0124 03:55:04.643658 4772 generic.go:334] "Generic (PLEG): container finished" podID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerID="cdc8e806f759ca65948eab0dcd9ee9a0de936b5e8a5b048cbd6b44502a5f9556" exitCode=0 Jan 24 03:55:04 crc kubenswrapper[4772]: I0124 03:55:04.643788 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" event={"ID":"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01","Type":"ContainerDied","Data":"cdc8e806f759ca65948eab0dcd9ee9a0de936b5e8a5b048cbd6b44502a5f9556"} Jan 24 03:55:04 crc kubenswrapper[4772]: I0124 03:55:04.645918 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" event={"ID":"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01","Type":"ContainerStarted","Data":"40161c7436eb94e818f0ec37ca12ae9658b77e3ba22754b574f8575cdab33f44"} Jan 24 03:55:05 crc kubenswrapper[4772]: I0124 03:55:05.655515 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" event={"ID":"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01","Type":"ContainerStarted","Data":"7e1e658d856b2373560d6c31444ed5489296e4a20aae4a797c8e832319e7b834"} Jan 24 03:55:06 crc kubenswrapper[4772]: 
I0124 03:55:06.667619 4772 generic.go:334] "Generic (PLEG): container finished" podID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerID="7e1e658d856b2373560d6c31444ed5489296e4a20aae4a797c8e832319e7b834" exitCode=0 Jan 24 03:55:06 crc kubenswrapper[4772]: I0124 03:55:06.667793 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" event={"ID":"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01","Type":"ContainerDied","Data":"7e1e658d856b2373560d6c31444ed5489296e4a20aae4a797c8e832319e7b834"} Jan 24 03:55:07 crc kubenswrapper[4772]: I0124 03:55:07.675608 4772 generic.go:334] "Generic (PLEG): container finished" podID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerID="c483479fe58315314f7b596ae6eeb6ebfabed30bd05f74ab4cf93a16f45b4b77" exitCode=0 Jan 24 03:55:07 crc kubenswrapper[4772]: I0124 03:55:07.675693 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" event={"ID":"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01","Type":"ContainerDied","Data":"c483479fe58315314f7b596ae6eeb6ebfabed30bd05f74ab4cf93a16f45b4b77"} Jan 24 03:55:08 crc kubenswrapper[4772]: I0124 03:55:08.969136 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.165320 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-bundle\") pod \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.165970 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-util\") pod \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.166056 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nl58\" (UniqueName: \"kubernetes.io/projected/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-kube-api-access-2nl58\") pod \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\" (UID: \"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01\") " Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.168102 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-bundle" (OuterVolumeSpecName: "bundle") pod "3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" (UID: "3ebaf797-e602-47ea-a5e4-24fd2f2ecf01"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.173890 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-kube-api-access-2nl58" (OuterVolumeSpecName: "kube-api-access-2nl58") pod "3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" (UID: "3ebaf797-e602-47ea-a5e4-24fd2f2ecf01"). InnerVolumeSpecName "kube-api-access-2nl58". 
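The reconciler_common entries trace one reconciliation loop over volumes: names present in the desired world get VerifyControllerAttachedVolume and MountVolume passes; names left only in the actual world (because the pod was deleted) get UnmountVolume and are finally reported "Volume detached". A set-difference sketch of that loop (generic Go, not the kubelet's reconciler; volume names taken from the bundle pod above):

package main

import "fmt"

// reconcile compares desired and actual volume sets, mounting what is
// missing and unmounting what is no longer wanted -- the two passes the
// reconciler_common.go entries record.
func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Println("MountVolume started for volume", v)
			actual[v] = true
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Println("UnmountVolume started for volume", v)
			delete(actual, v)
			fmt.Println("Volume detached for volume", v)
		}
	}
}

func main() {
	actual := map[string]bool{"bundle": true, "util": true, "kube-api-access-2nl58": true}
	// The bundle pod has been deleted, so its desired volume set is empty.
	reconcile(map[string]bool{}, actual)
}
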
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.185900 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-util" (OuterVolumeSpecName: "util") pod "3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" (UID: "3ebaf797-e602-47ea-a5e4-24fd2f2ecf01"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.267843 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.267925 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-util\") on node \"crc\" DevicePath \"\"" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.267941 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nl58\" (UniqueName: \"kubernetes.io/projected/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01-kube-api-access-2nl58\") on node \"crc\" DevicePath \"\"" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.697936 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" event={"ID":"3ebaf797-e602-47ea-a5e4-24fd2f2ecf01","Type":"ContainerDied","Data":"40161c7436eb94e818f0ec37ca12ae9658b77e3ba22754b574f8575cdab33f44"} Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.698019 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40161c7436eb94e818f0ec37ca12ae9658b77e3ba22754b574f8575cdab33f44" Jan 24 03:55:09 crc kubenswrapper[4772]: I0124 03:55:09.698096 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.452551 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v"] Jan 24 03:55:16 crc kubenswrapper[4772]: E0124 03:55:16.453327 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerName="util" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.453343 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerName="util" Jan 24 03:55:16 crc kubenswrapper[4772]: E0124 03:55:16.453358 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerName="extract" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.453366 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerName="extract" Jan 24 03:55:16 crc kubenswrapper[4772]: E0124 03:55:16.453374 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerName="pull" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.453383 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerName="pull" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.453507 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" containerName="extract" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.454102 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.462695 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4tcpb" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.464510 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.469206 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct25p\" (UniqueName: \"kubernetes.io/projected/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-kube-api-access-ct25p\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.469271 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-apiservice-cert\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.469383 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-webhook-cert\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: 
\"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.470229 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v"] Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.470662 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.570686 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-webhook-cert\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.570810 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct25p\" (UniqueName: \"kubernetes.io/projected/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-kube-api-access-ct25p\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.570855 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-apiservice-cert\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.577178 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-webhook-cert\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.580013 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-apiservice-cert\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.590640 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct25p\" (UniqueName: \"kubernetes.io/projected/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-kube-api-access-ct25p\") pod \"mariadb-operator-controller-manager-55c49c975d-nch2v\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.773417 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.900212 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:55:16 crc kubenswrapper[4772]: I0124 03:55:16.900270 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:55:17 crc kubenswrapper[4772]: I0124 03:55:17.009704 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v"] Jan 24 03:55:17 crc kubenswrapper[4772]: I0124 03:55:17.762238 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" event={"ID":"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0","Type":"ContainerStarted","Data":"ff0a4f2debe80ecb6faaa89c818681af8eee704220113d9dd015a80feb898c0d"} Jan 24 03:55:21 crc kubenswrapper[4772]: I0124 03:55:21.800374 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" event={"ID":"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0","Type":"ContainerStarted","Data":"091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c"} Jan 24 03:55:21 crc kubenswrapper[4772]: I0124 03:55:21.801490 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:21 crc kubenswrapper[4772]: I0124 03:55:21.826654 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" podStartSLOduration=1.650543168 podStartE2EDuration="5.826613474s" podCreationTimestamp="2026-01-24 03:55:16 +0000 UTC" firstStartedPulling="2026-01-24 03:55:17.028079751 +0000 UTC m=+814.065170496" lastFinishedPulling="2026-01-24 03:55:21.204150067 +0000 UTC m=+818.241240802" observedRunningTime="2026-01-24 03:55:21.82614642 +0000 UTC m=+818.863237215" watchObservedRunningTime="2026-01-24 03:55:21.826613474 +0000 UTC m=+818.863704239" Jan 24 03:55:26 crc kubenswrapper[4772]: I0124 03:55:26.779617 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 03:55:31 crc kubenswrapper[4772]: I0124 03:55:31.789272 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-fgqf6"] Jan 24 03:55:31 crc kubenswrapper[4772]: I0124 03:55:31.790280 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:31 crc kubenswrapper[4772]: I0124 03:55:31.793263 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-xjkmw" Jan 24 03:55:31 crc kubenswrapper[4772]: I0124 03:55:31.809320 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-fgqf6"] Jan 24 03:55:31 crc kubenswrapper[4772]: I0124 03:55:31.900666 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5nfs\" (UniqueName: \"kubernetes.io/projected/5b8ea70a-6c94-4ff6-878b-9e932a7251a5-kube-api-access-n5nfs\") pod \"infra-operator-index-fgqf6\" (UID: \"5b8ea70a-6c94-4ff6-878b-9e932a7251a5\") " pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:32 crc kubenswrapper[4772]: I0124 03:55:32.002102 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5nfs\" (UniqueName: \"kubernetes.io/projected/5b8ea70a-6c94-4ff6-878b-9e932a7251a5-kube-api-access-n5nfs\") pod \"infra-operator-index-fgqf6\" (UID: \"5b8ea70a-6c94-4ff6-878b-9e932a7251a5\") " pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:32 crc kubenswrapper[4772]: I0124 03:55:32.028279 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5nfs\" (UniqueName: \"kubernetes.io/projected/5b8ea70a-6c94-4ff6-878b-9e932a7251a5-kube-api-access-n5nfs\") pod \"infra-operator-index-fgqf6\" (UID: \"5b8ea70a-6c94-4ff6-878b-9e932a7251a5\") " pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:32 crc kubenswrapper[4772]: I0124 03:55:32.111169 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:32 crc kubenswrapper[4772]: I0124 03:55:32.517944 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-fgqf6"] Jan 24 03:55:32 crc kubenswrapper[4772]: W0124 03:55:32.525981 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b8ea70a_6c94_4ff6_878b_9e932a7251a5.slice/crio-f4f5aed054d1635251d4918745b91055973bdd876902e95636d5b5ed1c118442 WatchSource:0}: Error finding container f4f5aed054d1635251d4918745b91055973bdd876902e95636d5b5ed1c118442: Status 404 returned error can't find the container with id f4f5aed054d1635251d4918745b91055973bdd876902e95636d5b5ed1c118442 Jan 24 03:55:32 crc kubenswrapper[4772]: I0124 03:55:32.881204 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fgqf6" event={"ID":"5b8ea70a-6c94-4ff6-878b-9e932a7251a5","Type":"ContainerStarted","Data":"f4f5aed054d1635251d4918745b91055973bdd876902e95636d5b5ed1c118442"} Jan 24 03:55:33 crc kubenswrapper[4772]: I0124 03:55:33.889453 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fgqf6" event={"ID":"5b8ea70a-6c94-4ff6-878b-9e932a7251a5","Type":"ContainerStarted","Data":"532cac6a7bc8c117fc977167e447595e155c4b6cb04b9252deb628f76e0ef987"} Jan 24 03:55:33 crc kubenswrapper[4772]: I0124 03:55:33.911646 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-fgqf6" podStartSLOduration=2.022851544 podStartE2EDuration="2.911623483s" podCreationTimestamp="2026-01-24 03:55:31 +0000 UTC" firstStartedPulling="2026-01-24 03:55:32.527161425 +0000 UTC m=+829.564252150" lastFinishedPulling="2026-01-24 03:55:33.415933364 +0000 UTC m=+830.453024089" observedRunningTime="2026-01-24 03:55:33.908523652 +0000 UTC m=+830.945614377" watchObservedRunningTime="2026-01-24 03:55:33.911623483 +0000 UTC m=+830.948714248" Jan 24 03:55:42 crc kubenswrapper[4772]: I0124 03:55:42.112097 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:42 crc kubenswrapper[4772]: I0124 03:55:42.114002 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:42 crc kubenswrapper[4772]: I0124 03:55:42.168100 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:42 crc kubenswrapper[4772]: I0124 03:55:42.999273 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 03:55:46 crc kubenswrapper[4772]: I0124 03:55:46.900363 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:55:46 crc kubenswrapper[4772]: I0124 03:55:46.900456 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 24 03:55:46 crc kubenswrapper[4772]: I0124 03:55:46.900520 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 03:55:46 crc kubenswrapper[4772]: I0124 03:55:46.901365 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4c73373f46ca383dd1219546a633cd8ad9bb24ea298a230636c1f231a1c6003"} pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 03:55:46 crc kubenswrapper[4772]: I0124 03:55:46.901459 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://c4c73373f46ca383dd1219546a633cd8ad9bb24ea298a230636c1f231a1c6003" gracePeriod=600 Jan 24 03:55:48 crc kubenswrapper[4772]: I0124 03:55:48.012986 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="c4c73373f46ca383dd1219546a633cd8ad9bb24ea298a230636c1f231a1c6003" exitCode=0 Jan 24 03:55:48 crc kubenswrapper[4772]: I0124 03:55:48.013100 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"c4c73373f46ca383dd1219546a633cd8ad9bb24ea298a230636c1f231a1c6003"} Jan 24 03:55:48 crc kubenswrapper[4772]: I0124 03:55:48.013412 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"2e6f02f79a8cefb2a7d2c04486f279ad1f52ca159fe2ff0308256f0e25cae45d"} Jan 24 03:55:48 crc kubenswrapper[4772]: I0124 03:55:48.013442 4772 scope.go:117] "RemoveContainer" containerID="3a1531a803689441abefc671ffc29f346d56f44c9d3bb0f57c687ae2188e6f75" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.446159 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv"] Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.449225 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.452331 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wzk78" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.461762 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv"] Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.561938 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxqbp\" (UniqueName: \"kubernetes.io/projected/950be24a-370e-4a36-9f60-3342e339e1c6-kube-api-access-vxqbp\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.562022 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-bundle\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.562148 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-util\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.663382 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxqbp\" (UniqueName: \"kubernetes.io/projected/950be24a-370e-4a36-9f60-3342e339e1c6-kube-api-access-vxqbp\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.663492 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-bundle\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.663561 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-util\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.664539 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-util\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.664817 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-bundle\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.695347 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxqbp\" (UniqueName: \"kubernetes.io/projected/950be24a-370e-4a36-9f60-3342e339e1c6-kube-api-access-vxqbp\") pod \"5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:49 crc kubenswrapper[4772]: I0124 03:55:49.796978 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:50 crc kubenswrapper[4772]: I0124 03:55:50.137074 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv"] Jan 24 03:55:50 crc kubenswrapper[4772]: W0124 03:55:50.144768 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950be24a_370e_4a36_9f60_3342e339e1c6.slice/crio-5274eb9110966c637bae2499b72da55d41945a6d9dbed5e1e88f5898946c12ea WatchSource:0}: Error finding container 5274eb9110966c637bae2499b72da55d41945a6d9dbed5e1e88f5898946c12ea: Status 404 returned error can't find the container with id 5274eb9110966c637bae2499b72da55d41945a6d9dbed5e1e88f5898946c12ea Jan 24 03:55:51 crc kubenswrapper[4772]: I0124 03:55:51.044493 4772 generic.go:334] "Generic (PLEG): container finished" podID="950be24a-370e-4a36-9f60-3342e339e1c6" containerID="11630e9af61f3b886d721f8a4f50af700ddb3cea53ef4e843beef0eefdd613dc" exitCode=0 Jan 24 03:55:51 crc kubenswrapper[4772]: I0124 03:55:51.044542 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" event={"ID":"950be24a-370e-4a36-9f60-3342e339e1c6","Type":"ContainerDied","Data":"11630e9af61f3b886d721f8a4f50af700ddb3cea53ef4e843beef0eefdd613dc"} Jan 24 03:55:51 crc kubenswrapper[4772]: I0124 03:55:51.044570 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" event={"ID":"950be24a-370e-4a36-9f60-3342e339e1c6","Type":"ContainerStarted","Data":"5274eb9110966c637bae2499b72da55d41945a6d9dbed5e1e88f5898946c12ea"} Jan 24 03:55:53 crc kubenswrapper[4772]: I0124 03:55:53.061549 4772 generic.go:334] "Generic (PLEG): container finished" podID="950be24a-370e-4a36-9f60-3342e339e1c6" containerID="bcf5b0511ebdc9c2a965bb7b3acbd316497221138cb98647681e56d6af033f10" exitCode=0 Jan 24 03:55:53 crc kubenswrapper[4772]: I0124 03:55:53.061633 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" event={"ID":"950be24a-370e-4a36-9f60-3342e339e1c6","Type":"ContainerDied","Data":"bcf5b0511ebdc9c2a965bb7b3acbd316497221138cb98647681e56d6af033f10"} Jan 24 03:55:54 crc kubenswrapper[4772]: I0124 03:55:54.079632 4772 generic.go:334] "Generic (PLEG): container finished" podID="950be24a-370e-4a36-9f60-3342e339e1c6" containerID="ad1b4cfb6c39f8ad5d43d3450f958e07049dc0e1015d61c53f8fe225011492e3" exitCode=0 Jan 24 03:55:54 crc kubenswrapper[4772]: I0124 03:55:54.079692 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" event={"ID":"950be24a-370e-4a36-9f60-3342e339e1c6","Type":"ContainerDied","Data":"ad1b4cfb6c39f8ad5d43d3450f958e07049dc0e1015d61c53f8fe225011492e3"} Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.420862 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.468352 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-bundle\") pod \"950be24a-370e-4a36-9f60-3342e339e1c6\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.468435 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-util\") pod \"950be24a-370e-4a36-9f60-3342e339e1c6\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.468508 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxqbp\" (UniqueName: \"kubernetes.io/projected/950be24a-370e-4a36-9f60-3342e339e1c6-kube-api-access-vxqbp\") pod \"950be24a-370e-4a36-9f60-3342e339e1c6\" (UID: \"950be24a-370e-4a36-9f60-3342e339e1c6\") " Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.472952 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-bundle" (OuterVolumeSpecName: "bundle") pod "950be24a-370e-4a36-9f60-3342e339e1c6" (UID: "950be24a-370e-4a36-9f60-3342e339e1c6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.477115 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950be24a-370e-4a36-9f60-3342e339e1c6-kube-api-access-vxqbp" (OuterVolumeSpecName: "kube-api-access-vxqbp") pod "950be24a-370e-4a36-9f60-3342e339e1c6" (UID: "950be24a-370e-4a36-9f60-3342e339e1c6"). InnerVolumeSpecName "kube-api-access-vxqbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.494980 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-util" (OuterVolumeSpecName: "util") pod "950be24a-370e-4a36-9f60-3342e339e1c6" (UID: "950be24a-370e-4a36-9f60-3342e339e1c6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.570635 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxqbp\" (UniqueName: \"kubernetes.io/projected/950be24a-370e-4a36-9f60-3342e339e1c6-kube-api-access-vxqbp\") on node \"crc\" DevicePath \"\"" Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.570685 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:55:55 crc kubenswrapper[4772]: I0124 03:55:55.570701 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950be24a-370e-4a36-9f60-3342e339e1c6-util\") on node \"crc\" DevicePath \"\"" Jan 24 03:55:56 crc kubenswrapper[4772]: I0124 03:55:56.097552 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" event={"ID":"950be24a-370e-4a36-9f60-3342e339e1c6","Type":"ContainerDied","Data":"5274eb9110966c637bae2499b72da55d41945a6d9dbed5e1e88f5898946c12ea"} Jan 24 03:55:56 crc kubenswrapper[4772]: I0124 03:55:56.097594 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5274eb9110966c637bae2499b72da55d41945a6d9dbed5e1e88f5898946c12ea" Jan 24 03:55:56 crc kubenswrapper[4772]: I0124 03:55:56.097611 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.108211 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Jan 24 03:56:03 crc kubenswrapper[4772]: E0124 03:56:03.108938 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" containerName="pull" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.108955 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" containerName="pull" Jan 24 03:56:03 crc kubenswrapper[4772]: E0124 03:56:03.108972 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" containerName="util" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.108979 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" containerName="util" Jan 24 03:56:03 crc kubenswrapper[4772]: E0124 03:56:03.108993 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" containerName="extract" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.109001 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" containerName="extract" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.109105 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" containerName="extract" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.109726 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.112221 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"openshift-service-ca.crt" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.112332 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"kube-root-ca.crt" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.112342 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"openstack-scripts" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.113160 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"galera-openstack-dockercfg-xlz4p" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.113386 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"openstack-config-data" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.120924 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.126172 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.127488 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.130654 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.131617 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.149167 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.163748 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199762 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199827 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h446z\" (UniqueName: \"kubernetes.io/projected/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kube-api-access-h446z\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199856 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tzwx\" (UniqueName: \"kubernetes.io/projected/371de15b-9f9b-445c-afa1-eea50501d846-kube-api-access-4tzwx\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199890 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-kolla-config\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199921 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-default\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199948 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199969 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.199999 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-operator-scripts\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200030 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-generated\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200057 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kolla-config\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200080 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/371de15b-9f9b-445c-afa1-eea50501d846-config-data-generated\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200103 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200137 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-operator-scripts\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200160 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pp7s\" (UniqueName: \"kubernetes.io/projected/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kube-api-access-9pp7s\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200340 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200365 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.200381 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-config-data-default\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301391 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-operator-scripts\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301446 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-generated\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301465 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kolla-config\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301486 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/371de15b-9f9b-445c-afa1-eea50501d846-config-data-generated\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301529 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-operator-scripts\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301571 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pp7s\" (UniqueName: \"kubernetes.io/projected/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kube-api-access-9pp7s\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301593 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301610 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301625 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-config-data-default\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301647 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301674 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h446z\" (UniqueName: 
\"kubernetes.io/projected/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kube-api-access-h446z\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301697 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tzwx\" (UniqueName: \"kubernetes.io/projected/371de15b-9f9b-445c-afa1-eea50501d846-kube-api-access-4tzwx\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301722 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-kolla-config\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301755 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-default\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301773 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.301787 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.302435 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") device mount path \"/mnt/openstack/pv07\"" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.302561 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-generated\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.302870 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/371de15b-9f9b-445c-afa1-eea50501d846-config-data-generated\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.302958 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.303012 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kolla-config\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.303030 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") device mount path \"/mnt/openstack/pv08\"" pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.303414 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-kolla-config\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.303440 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") device mount path \"/mnt/openstack/pv03\"" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.303591 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-default\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.303879 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-operator-scripts\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.304602 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-operator-scripts\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.322207 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.322327 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 
03:56:03.324021 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.327100 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.327402 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.328503 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.328580 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pp7s\" (UniqueName: \"kubernetes.io/projected/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kube-api-access-9pp7s\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.333486 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tzwx\" (UniqueName: \"kubernetes.io/projected/371de15b-9f9b-445c-afa1-eea50501d846-kube-api-access-4tzwx\") pod \"openstack-galera-0\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") " pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.344839 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.357583 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h446z\" (UniqueName: \"kubernetes.io/projected/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kube-api-access-h446z\") pod \"openstack-galera-1\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") " pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.426648 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.441050 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.452778 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.697396 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.770174 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Jan 24 03:56:03 crc kubenswrapper[4772]: W0124 03:56:03.773866 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod371de15b_9f9b_445c_afa1_eea50501d846.slice/crio-d0f1a9d36757627fc9dd235a0490ee993ae77e9bbc2d70312f6b6b713f0c4f22 WatchSource:0}: Error finding container d0f1a9d36757627fc9dd235a0490ee993ae77e9bbc2d70312f6b6b713f0c4f22: Status 404 returned error can't find the container with id d0f1a9d36757627fc9dd235a0490ee993ae77e9bbc2d70312f6b6b713f0c4f22 Jan 24 03:56:03 crc kubenswrapper[4772]: I0124 03:56:03.809480 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Jan 24 03:56:03 crc kubenswrapper[4772]: W0124 03:56:03.814588 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d2f9a52_48db_4361_b4d5_7f9ea2d6678a.slice/crio-5eaef4de0bb701457801f4c5b647f5a49a645c3b8b5e91e5032783615cd01980 WatchSource:0}: Error finding container 5eaef4de0bb701457801f4c5b647f5a49a645c3b8b5e91e5032783615cd01980: Status 404 returned error can't find the container with id 5eaef4de0bb701457801f4c5b647f5a49a645c3b8b5e91e5032783615cd01980 Jan 24 03:56:04 crc kubenswrapper[4772]: I0124 03:56:04.166167 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"371de15b-9f9b-445c-afa1-eea50501d846","Type":"ContainerStarted","Data":"d0f1a9d36757627fc9dd235a0490ee993ae77e9bbc2d70312f6b6b713f0c4f22"} Jan 24 03:56:04 crc kubenswrapper[4772]: I0124 03:56:04.167978 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"f7ee0cb0-31f8-40c4-baee-a20c07e97162","Type":"ContainerStarted","Data":"f942cd6c5b2678c423eae0f5b7a98ffc24c3bfc2016d3db12751c828dad8dab1"} Jan 24 03:56:04 crc kubenswrapper[4772]: I0124 03:56:04.169021 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a","Type":"ContainerStarted","Data":"5eaef4de0bb701457801f4c5b647f5a49a645c3b8b5e91e5032783615cd01980"} Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.092773 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx"] Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.093547 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.097355 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.101929 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gx596" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.130000 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx"] Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.130160 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-webhook-cert\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.130513 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-795ng\" (UniqueName: \"kubernetes.io/projected/add30b7e-5452-48b0-9d24-9ac6dba05f43-kube-api-access-795ng\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.130646 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-apiservice-cert\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.233286 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-795ng\" (UniqueName: \"kubernetes.io/projected/add30b7e-5452-48b0-9d24-9ac6dba05f43-kube-api-access-795ng\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.233345 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-apiservice-cert\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.235112 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-webhook-cert\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.244896 4772 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-apiservice-cert\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.246338 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-webhook-cert\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.256105 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-795ng\" (UniqueName: \"kubernetes.io/projected/add30b7e-5452-48b0-9d24-9ac6dba05f43-kube-api-access-795ng\") pod \"infra-operator-controller-manager-568c7bc546-wh4qx\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.438009 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:05 crc kubenswrapper[4772]: I0124 03:56:05.951808 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx"] Jan 24 03:56:05 crc kubenswrapper[4772]: W0124 03:56:05.967483 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadd30b7e_5452_48b0_9d24_9ac6dba05f43.slice/crio-ccd9793c7ad7a868a70b7bd8e1a014e4b5b7c32a24255362ae0671881fab017e WatchSource:0}: Error finding container ccd9793c7ad7a868a70b7bd8e1a014e4b5b7c32a24255362ae0671881fab017e: Status 404 returned error can't find the container with id ccd9793c7ad7a868a70b7bd8e1a014e4b5b7c32a24255362ae0671881fab017e Jan 24 03:56:06 crc kubenswrapper[4772]: I0124 03:56:06.190022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" event={"ID":"add30b7e-5452-48b0-9d24-9ac6dba05f43","Type":"ContainerStarted","Data":"ccd9793c7ad7a868a70b7bd8e1a014e4b5b7c32a24255362ae0671881fab017e"} Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.576160 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nbcmx"] Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.577754 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.598176 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbcmx"] Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.615089 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsd4\" (UniqueName: \"kubernetes.io/projected/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-kube-api-access-xqsd4\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.615354 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-utilities\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.615473 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-catalog-content\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.716326 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-utilities\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.716422 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-catalog-content\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.716530 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqsd4\" (UniqueName: \"kubernetes.io/projected/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-kube-api-access-xqsd4\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.716801 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-utilities\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.717282 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-catalog-content\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.739555 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xqsd4\" (UniqueName: \"kubernetes.io/projected/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-kube-api-access-xqsd4\") pod \"community-operators-nbcmx\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:09 crc kubenswrapper[4772]: I0124 03:56:09.915151 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:15 crc kubenswrapper[4772]: I0124 03:56:15.919414 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbcmx"] Jan 24 03:56:15 crc kubenswrapper[4772]: W0124 03:56:15.927519 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b3b846_6b6b_4d91_a0e7_a8d2fdbd7469.slice/crio-9328484aa26d2ca0567ca887b72239a878436167872ab645c1c370c18ea00d93 WatchSource:0}: Error finding container 9328484aa26d2ca0567ca887b72239a878436167872ab645c1c370c18ea00d93: Status 404 returned error can't find the container with id 9328484aa26d2ca0567ca887b72239a878436167872ab645c1c370c18ea00d93 Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.265294 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" event={"ID":"add30b7e-5452-48b0-9d24-9ac6dba05f43","Type":"ContainerStarted","Data":"ecb2dc2d8df2a2d975ccdb996da9e2b1056856c51e5313ade94d2f56df741b6c"} Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.266413 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.267437 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"371de15b-9f9b-445c-afa1-eea50501d846","Type":"ContainerStarted","Data":"7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529"} Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.268775 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"f7ee0cb0-31f8-40c4-baee-a20c07e97162","Type":"ContainerStarted","Data":"aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e"} Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.270320 4772 generic.go:334] "Generic (PLEG): container finished" podID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerID="7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14" exitCode=0 Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.270364 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbcmx" event={"ID":"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469","Type":"ContainerDied","Data":"7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14"} Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.270379 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbcmx" event={"ID":"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469","Type":"ContainerStarted","Data":"9328484aa26d2ca0567ca887b72239a878436167872ab645c1c370c18ea00d93"} Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.279561 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" 
event={"ID":"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a","Type":"ContainerStarted","Data":"8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9"} Jan 24 03:56:16 crc kubenswrapper[4772]: I0124 03:56:16.290763 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" podStartSLOduration=1.771421969 podStartE2EDuration="11.290748071s" podCreationTimestamp="2026-01-24 03:56:05 +0000 UTC" firstStartedPulling="2026-01-24 03:56:05.978505319 +0000 UTC m=+863.015596044" lastFinishedPulling="2026-01-24 03:56:15.497831421 +0000 UTC m=+872.534922146" observedRunningTime="2026-01-24 03:56:16.288510207 +0000 UTC m=+873.325600932" watchObservedRunningTime="2026-01-24 03:56:16.290748071 +0000 UTC m=+873.327838796" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.188418 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-945xp"] Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.189833 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.199287 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-945xp"] Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.289214 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbcmx" event={"ID":"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469","Type":"ContainerStarted","Data":"16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1"} Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.340453 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-catalog-content\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.340517 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjscn\" (UniqueName: \"kubernetes.io/projected/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-kube-api-access-wjscn\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.340837 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-utilities\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.445584 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-utilities\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.446129 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-catalog-content\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.446176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjscn\" (UniqueName: \"kubernetes.io/projected/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-kube-api-access-wjscn\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.448909 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-catalog-content\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.448963 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-utilities\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.470428 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjscn\" (UniqueName: \"kubernetes.io/projected/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-kube-api-access-wjscn\") pod \"certified-operators-945xp\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.512201 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:17 crc kubenswrapper[4772]: I0124 03:56:17.984360 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-945xp"] Jan 24 03:56:17 crc kubenswrapper[4772]: W0124 03:56:17.994924 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3887c6b1_94e5_4f41_af9e_a4cd0eebadf3.slice/crio-ff87bf54703d66de2cf1f5ef3a5f0c3e179b4712fcf344e66dca2347a0ff1a66 WatchSource:0}: Error finding container ff87bf54703d66de2cf1f5ef3a5f0c3e179b4712fcf344e66dca2347a0ff1a66: Status 404 returned error can't find the container with id ff87bf54703d66de2cf1f5ef3a5f0c3e179b4712fcf344e66dca2347a0ff1a66 Jan 24 03:56:18 crc kubenswrapper[4772]: I0124 03:56:18.296271 4772 generic.go:334] "Generic (PLEG): container finished" podID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerID="16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1" exitCode=0 Jan 24 03:56:18 crc kubenswrapper[4772]: I0124 03:56:18.296373 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbcmx" event={"ID":"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469","Type":"ContainerDied","Data":"16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1"} Jan 24 03:56:18 crc kubenswrapper[4772]: I0124 03:56:18.298002 4772 generic.go:334] "Generic (PLEG): container finished" podID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerID="f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122" exitCode=0 Jan 24 03:56:18 crc kubenswrapper[4772]: I0124 03:56:18.298080 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945xp" event={"ID":"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3","Type":"ContainerDied","Data":"f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122"} Jan 24 03:56:18 crc kubenswrapper[4772]: I0124 03:56:18.298124 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945xp" event={"ID":"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3","Type":"ContainerStarted","Data":"ff87bf54703d66de2cf1f5ef3a5f0c3e179b4712fcf344e66dca2347a0ff1a66"} Jan 24 03:56:19 crc kubenswrapper[4772]: I0124 03:56:19.309039 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbcmx" event={"ID":"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469","Type":"ContainerStarted","Data":"739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8"} Jan 24 03:56:19 crc kubenswrapper[4772]: I0124 03:56:19.310820 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945xp" event={"ID":"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3","Type":"ContainerStarted","Data":"ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b"} Jan 24 03:56:19 crc kubenswrapper[4772]: I0124 03:56:19.337654 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbcmx" podStartSLOduration=7.923246605 podStartE2EDuration="10.337636418s" podCreationTimestamp="2026-01-24 03:56:09 +0000 UTC" firstStartedPulling="2026-01-24 03:56:16.271577705 +0000 UTC m=+873.308668430" lastFinishedPulling="2026-01-24 03:56:18.685967518 +0000 UTC m=+875.723058243" observedRunningTime="2026-01-24 03:56:19.332018668 +0000 UTC m=+876.369109393" watchObservedRunningTime="2026-01-24 03:56:19.337636418 +0000 UTC m=+876.374727143" Jan 
24 03:56:19 crc kubenswrapper[4772]: I0124 03:56:19.916310 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:19 crc kubenswrapper[4772]: I0124 03:56:19.916636 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.323159 4772 generic.go:334] "Generic (PLEG): container finished" podID="371de15b-9f9b-445c-afa1-eea50501d846" containerID="7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529" exitCode=0 Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.323325 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"371de15b-9f9b-445c-afa1-eea50501d846","Type":"ContainerDied","Data":"7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529"} Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.326011 4772 generic.go:334] "Generic (PLEG): container finished" podID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerID="aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e" exitCode=0 Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.326185 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"f7ee0cb0-31f8-40c4-baee-a20c07e97162","Type":"ContainerDied","Data":"aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e"} Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.331361 4772 generic.go:334] "Generic (PLEG): container finished" podID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerID="8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9" exitCode=0 Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.331467 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a","Type":"ContainerDied","Data":"8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9"} Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.336020 4772 generic.go:334] "Generic (PLEG): container finished" podID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerID="ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b" exitCode=0 Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.339343 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945xp" event={"ID":"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3","Type":"ContainerDied","Data":"ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b"} Jan 24 03:56:20 crc kubenswrapper[4772]: I0124 03:56:20.962326 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nbcmx" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="registry-server" probeResult="failure" output=< Jan 24 03:56:20 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Jan 24 03:56:20 crc kubenswrapper[4772]: > Jan 24 03:56:21 crc kubenswrapper[4772]: I0124 03:56:21.344800 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"f7ee0cb0-31f8-40c4-baee-a20c07e97162","Type":"ContainerStarted","Data":"a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0"} Jan 24 03:56:21 crc kubenswrapper[4772]: I0124 03:56:21.346676 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" 
event={"ID":"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a","Type":"ContainerStarted","Data":"e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b"} Jan 24 03:56:21 crc kubenswrapper[4772]: I0124 03:56:21.349572 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945xp" event={"ID":"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3","Type":"ContainerStarted","Data":"50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e"} Jan 24 03:56:21 crc kubenswrapper[4772]: I0124 03:56:21.351235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"371de15b-9f9b-445c-afa1-eea50501d846","Type":"ContainerStarted","Data":"659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852"} Jan 24 03:56:21 crc kubenswrapper[4772]: I0124 03:56:21.367239 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/openstack-galera-2" podStartSLOduration=7.500201275 podStartE2EDuration="19.367214411s" podCreationTimestamp="2026-01-24 03:56:02 +0000 UTC" firstStartedPulling="2026-01-24 03:56:03.7068946 +0000 UTC m=+860.743985335" lastFinishedPulling="2026-01-24 03:56:15.573907726 +0000 UTC m=+872.610998471" observedRunningTime="2026-01-24 03:56:21.362208389 +0000 UTC m=+878.399299134" watchObservedRunningTime="2026-01-24 03:56:21.367214411 +0000 UTC m=+878.404305156" Jan 24 03:56:21 crc kubenswrapper[4772]: I0124 03:56:21.383984 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-945xp" podStartSLOduration=1.881568153 podStartE2EDuration="4.383960658s" podCreationTimestamp="2026-01-24 03:56:17 +0000 UTC" firstStartedPulling="2026-01-24 03:56:18.299727989 +0000 UTC m=+875.336818714" lastFinishedPulling="2026-01-24 03:56:20.802120484 +0000 UTC m=+877.839211219" observedRunningTime="2026-01-24 03:56:21.38263808 +0000 UTC m=+878.419728815" watchObservedRunningTime="2026-01-24 03:56:21.383960658 +0000 UTC m=+878.421051403" Jan 24 03:56:21 crc kubenswrapper[4772]: I0124 03:56:21.403109 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/openstack-galera-0" podStartSLOduration=7.629800663 podStartE2EDuration="19.403094952s" podCreationTimestamp="2026-01-24 03:56:02 +0000 UTC" firstStartedPulling="2026-01-24 03:56:03.777825988 +0000 UTC m=+860.814916713" lastFinishedPulling="2026-01-24 03:56:15.551120277 +0000 UTC m=+872.588211002" observedRunningTime="2026-01-24 03:56:21.402239968 +0000 UTC m=+878.439330693" watchObservedRunningTime="2026-01-24 03:56:21.403094952 +0000 UTC m=+878.440185677" Jan 24 03:56:23 crc kubenswrapper[4772]: I0124 03:56:23.427026 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:23 crc kubenswrapper[4772]: I0124 03:56:23.427862 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:23 crc kubenswrapper[4772]: I0124 03:56:23.441283 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:23 crc kubenswrapper[4772]: I0124 03:56:23.441587 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:23 crc kubenswrapper[4772]: I0124 03:56:23.453961 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:23 crc kubenswrapper[4772]: I0124 03:56:23.454499 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:25 crc kubenswrapper[4772]: I0124 03:56:25.444755 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 03:56:25 crc kubenswrapper[4772]: I0124 03:56:25.463929 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/openstack-galera-1" podStartSLOduration=11.727569178 podStartE2EDuration="23.463912446s" podCreationTimestamp="2026-01-24 03:56:02 +0000 UTC" firstStartedPulling="2026-01-24 03:56:03.816874799 +0000 UTC m=+860.853965524" lastFinishedPulling="2026-01-24 03:56:15.553218057 +0000 UTC m=+872.590308792" observedRunningTime="2026-01-24 03:56:21.433209099 +0000 UTC m=+878.470299824" watchObservedRunningTime="2026-01-24 03:56:25.463912446 +0000 UTC m=+882.501003171" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.415414 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/memcached-0"] Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.416194 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.419498 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"memcached-config-data" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.419748 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"memcached-memcached-dockercfg-vvzvb" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.433812 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.500198 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-config-data\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.500249 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqb6p\" (UniqueName: \"kubernetes.io/projected/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kube-api-access-nqb6p\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.500282 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kolla-config\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.602162 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-config-data\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.602384 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nqb6p\" (UniqueName: \"kubernetes.io/projected/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kube-api-access-nqb6p\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.602501 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kolla-config\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.603185 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-config-data\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.603510 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kolla-config\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.629784 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqb6p\" (UniqueName: \"kubernetes.io/projected/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kube-api-access-nqb6p\") pod \"memcached-0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") " pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:26 crc kubenswrapper[4772]: I0124 03:56:26.739555 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.408949 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.512870 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.512930 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.555347 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.579943 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hhzdk"] Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.580930 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.583269 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-b9ds2" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.599837 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hhzdk"] Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.718451 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5229g\" (UniqueName: \"kubernetes.io/projected/821f2a99-5a55-4043-9b0a-ec9197151c46-kube-api-access-5229g\") pod \"rabbitmq-cluster-operator-index-hhzdk\" (UID: \"821f2a99-5a55-4043-9b0a-ec9197151c46\") " pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.819588 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5229g\" (UniqueName: \"kubernetes.io/projected/821f2a99-5a55-4043-9b0a-ec9197151c46-kube-api-access-5229g\") pod \"rabbitmq-cluster-operator-index-hhzdk\" (UID: \"821f2a99-5a55-4043-9b0a-ec9197151c46\") " pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.839985 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5229g\" (UniqueName: \"kubernetes.io/projected/821f2a99-5a55-4043-9b0a-ec9197151c46-kube-api-access-5229g\") pod \"rabbitmq-cluster-operator-index-hhzdk\" (UID: \"821f2a99-5a55-4043-9b0a-ec9197151c46\") " pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" Jan 24 03:56:27 crc kubenswrapper[4772]: I0124 03:56:27.967445 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" Jan 24 03:56:28 crc kubenswrapper[4772]: I0124 03:56:28.271684 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hhzdk"] Jan 24 03:56:28 crc kubenswrapper[4772]: I0124 03:56:28.407392 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" event={"ID":"821f2a99-5a55-4043-9b0a-ec9197151c46","Type":"ContainerStarted","Data":"dea86bc9abb57ccde359ba7442e4302245cd2ee4fe7b329306900fa7150044a9"} Jan 24 03:56:28 crc kubenswrapper[4772]: I0124 03:56:28.408678 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"f2831b28-9004-4323-a39f-0a43d7cbb6c0","Type":"ContainerStarted","Data":"037cf6c502ad0fd7a2a180c61607c181d4afa33118a141dea381537f62338f73"} Jan 24 03:56:28 crc kubenswrapper[4772]: I0124 03:56:28.452158 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:29 crc kubenswrapper[4772]: I0124 03:56:29.987547 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:30 crc kubenswrapper[4772]: I0124 03:56:30.044212 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:31 crc kubenswrapper[4772]: I0124 03:56:31.582676 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:31 crc kubenswrapper[4772]: I0124 03:56:31.668048 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.145336 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/root-account-create-update-k8xp2"] Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.146182 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.148328 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.157415 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-k8xp2"] Jan 24 03:56:32 crc kubenswrapper[4772]: E0124 03:56:32.256311 4772 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.12:42892->38.102.83.12:38643: write tcp 38.102.83.12:42892->38.102.83.12:38643: write: broken pipe Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.281632 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-operator-scripts\") pod \"root-account-create-update-k8xp2\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.281724 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8994\" (UniqueName: \"kubernetes.io/projected/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-kube-api-access-l8994\") pod \"root-account-create-update-k8xp2\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.371214 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-945xp"] Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.371526 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-945xp" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="registry-server" containerID="cri-o://50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e" gracePeriod=2 Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.383595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-operator-scripts\") pod \"root-account-create-update-k8xp2\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.383703 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8994\" (UniqueName: \"kubernetes.io/projected/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-kube-api-access-l8994\") pod \"root-account-create-update-k8xp2\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.384608 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-operator-scripts\") pod \"root-account-create-update-k8xp2\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.418807 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8994\" (UniqueName: 
\"kubernetes.io/projected/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-kube-api-access-l8994\") pod \"root-account-create-update-k8xp2\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.470267 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.843212 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.890041 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-utilities\") pod \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.891680 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjscn\" (UniqueName: \"kubernetes.io/projected/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-kube-api-access-wjscn\") pod \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.891518 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-utilities" (OuterVolumeSpecName: "utilities") pod "3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" (UID: "3887c6b1-94e5-4f41-af9e-a4cd0eebadf3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.893073 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-catalog-content\") pod \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\" (UID: \"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3\") " Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.893485 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.897594 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-kube-api-access-wjscn" (OuterVolumeSpecName: "kube-api-access-wjscn") pod "3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" (UID: "3887c6b1-94e5-4f41-af9e-a4cd0eebadf3"). InnerVolumeSpecName "kube-api-access-wjscn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:56:32 crc kubenswrapper[4772]: I0124 03:56:32.987483 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hhzdk"] Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.000269 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjscn\" (UniqueName: \"kubernetes.io/projected/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-kube-api-access-wjscn\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.033666 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" (UID: "3887c6b1-94e5-4f41-af9e-a4cd0eebadf3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.105559 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.164697 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-k8xp2"] Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.441349 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" event={"ID":"821f2a99-5a55-4043-9b0a-ec9197151c46","Type":"ContainerStarted","Data":"ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5"} Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.441401 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" podUID="821f2a99-5a55-4043-9b0a-ec9197151c46" containerName="registry-server" containerID="cri-o://ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5" gracePeriod=2 Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.443250 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"f2831b28-9004-4323-a39f-0a43d7cbb6c0","Type":"ContainerStarted","Data":"3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a"} Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.443404 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.449251 4772 generic.go:334] "Generic (PLEG): container finished" podID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerID="50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e" exitCode=0 Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.449313 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945xp" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.449339 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945xp" event={"ID":"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3","Type":"ContainerDied","Data":"50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e"} Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.449397 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945xp" event={"ID":"3887c6b1-94e5-4f41-af9e-a4cd0eebadf3","Type":"ContainerDied","Data":"ff87bf54703d66de2cf1f5ef3a5f0c3e179b4712fcf344e66dca2347a0ff1a66"} Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.449418 4772 scope.go:117] "RemoveContainer" containerID="50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.452566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" event={"ID":"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9","Type":"ContainerStarted","Data":"1a3f58a1f79febf026825b7ab2c7b9bce436517c22efb037662168685743cfed"} Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.452625 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" event={"ID":"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9","Type":"ContainerStarted","Data":"a2376a12f1735c35a1312aea6d747084e0b9b812e7a07a41f999bb3939d2a702"} Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.496229 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" podStartSLOduration=2.010525209 podStartE2EDuration="6.496207061s" podCreationTimestamp="2026-01-24 03:56:27 +0000 UTC" firstStartedPulling="2026-01-24 03:56:28.294275332 +0000 UTC m=+885.331366057" lastFinishedPulling="2026-01-24 03:56:32.779957184 +0000 UTC m=+889.817047909" observedRunningTime="2026-01-24 03:56:33.47016611 +0000 UTC m=+890.507256835" watchObservedRunningTime="2026-01-24 03:56:33.496207061 +0000 UTC m=+890.533297786" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.499548 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/memcached-0" podStartSLOduration=4.043259972 podStartE2EDuration="7.499538226s" podCreationTimestamp="2026-01-24 03:56:26 +0000 UTC" firstStartedPulling="2026-01-24 03:56:27.41936366 +0000 UTC m=+884.456454385" lastFinishedPulling="2026-01-24 03:56:30.875641914 +0000 UTC m=+887.912732639" observedRunningTime="2026-01-24 03:56:33.492861816 +0000 UTC m=+890.529952541" watchObservedRunningTime="2026-01-24 03:56:33.499538226 +0000 UTC m=+890.536628951" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.502558 4772 scope.go:117] "RemoveContainer" containerID="ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.520703 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" podStartSLOduration=1.520678137 podStartE2EDuration="1.520678137s" podCreationTimestamp="2026-01-24 03:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:56:33.517518408 +0000 UTC m=+890.554609123" watchObservedRunningTime="2026-01-24 03:56:33.520678137 +0000 UTC 
m=+890.557768862" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.551409 4772 scope.go:117] "RemoveContainer" containerID="f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.569677 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-945xp"] Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.577004 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-945xp"] Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.588583 4772 scope.go:117] "RemoveContainer" containerID="50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e" Jan 24 03:56:33 crc kubenswrapper[4772]: E0124 03:56:33.589201 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e\": container with ID starting with 50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e not found: ID does not exist" containerID="50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.589233 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e"} err="failed to get container status \"50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e\": rpc error: code = NotFound desc = could not find container \"50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e\": container with ID starting with 50fc003647f7ada5c809d08408352512c9ff6dd61c1d40eef112a9e1ec5a409e not found: ID does not exist" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.589256 4772 scope.go:117] "RemoveContainer" containerID="ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b" Jan 24 03:56:33 crc kubenswrapper[4772]: E0124 03:56:33.590380 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b\": container with ID starting with ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b not found: ID does not exist" containerID="ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.590424 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b"} err="failed to get container status \"ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b\": rpc error: code = NotFound desc = could not find container \"ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b\": container with ID starting with ca36082a0dd52b59432413e23bd424b40099b4b7c9eb9b3e06ebec984741767b not found: ID does not exist" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.590452 4772 scope.go:117] "RemoveContainer" containerID="f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122" Jan 24 03:56:33 crc kubenswrapper[4772]: E0124 03:56:33.590814 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122\": container with ID starting with f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122 
not found: ID does not exist" containerID="f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.590844 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122"} err="failed to get container status \"f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122\": rpc error: code = NotFound desc = could not find container \"f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122\": container with ID starting with f4ca777f559dac808ca29c3b433e609f8d0e6ac9f37c99548949e273b4b96122 not found: ID does not exist" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.670122 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" path="/var/lib/kubelet/pods/3887c6b1-94e5-4f41-af9e-a4cd0eebadf3/volumes" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.769016 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-tbqs5"] Jan 24 03:56:33 crc kubenswrapper[4772]: E0124 03:56:33.769254 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="extract-utilities" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.769266 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="extract-utilities" Jan 24 03:56:33 crc kubenswrapper[4772]: E0124 03:56:33.769285 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="registry-server" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.769291 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="registry-server" Jan 24 03:56:33 crc kubenswrapper[4772]: E0124 03:56:33.769298 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="extract-content" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.769304 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="extract-content" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.769414 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3887c6b1-94e5-4f41-af9e-a4cd0eebadf3" containerName="registry-server" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.770390 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.782103 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-tbqs5"] Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.818428 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfl8\" (UniqueName: \"kubernetes.io/projected/8bf7865a-3937-4a88-b4bc-b5f9a36f26f9-kube-api-access-chfl8\") pod \"rabbitmq-cluster-operator-index-tbqs5\" (UID: \"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9\") " pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.839683 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.919771 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5229g\" (UniqueName: \"kubernetes.io/projected/821f2a99-5a55-4043-9b0a-ec9197151c46-kube-api-access-5229g\") pod \"821f2a99-5a55-4043-9b0a-ec9197151c46\" (UID: \"821f2a99-5a55-4043-9b0a-ec9197151c46\") " Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.920268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chfl8\" (UniqueName: \"kubernetes.io/projected/8bf7865a-3937-4a88-b4bc-b5f9a36f26f9-kube-api-access-chfl8\") pod \"rabbitmq-cluster-operator-index-tbqs5\" (UID: \"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9\") " pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.934010 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821f2a99-5a55-4043-9b0a-ec9197151c46-kube-api-access-5229g" (OuterVolumeSpecName: "kube-api-access-5229g") pod "821f2a99-5a55-4043-9b0a-ec9197151c46" (UID: "821f2a99-5a55-4043-9b0a-ec9197151c46"). InnerVolumeSpecName "kube-api-access-5229g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:56:33 crc kubenswrapper[4772]: I0124 03:56:33.937655 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chfl8\" (UniqueName: \"kubernetes.io/projected/8bf7865a-3937-4a88-b4bc-b5f9a36f26f9-kube-api-access-chfl8\") pod \"rabbitmq-cluster-operator-index-tbqs5\" (UID: \"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9\") " pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.021323 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5229g\" (UniqueName: \"kubernetes.io/projected/821f2a99-5a55-4043-9b0a-ec9197151c46-kube-api-access-5229g\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.087661 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.460725 4772 generic.go:334] "Generic (PLEG): container finished" podID="821f2a99-5a55-4043-9b0a-ec9197151c46" containerID="ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5" exitCode=0 Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.460793 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.460837 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" event={"ID":"821f2a99-5a55-4043-9b0a-ec9197151c46","Type":"ContainerDied","Data":"ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5"} Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.460890 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hhzdk" event={"ID":"821f2a99-5a55-4043-9b0a-ec9197151c46","Type":"ContainerDied","Data":"dea86bc9abb57ccde359ba7442e4302245cd2ee4fe7b329306900fa7150044a9"} Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.460913 4772 scope.go:117] "RemoveContainer" containerID="ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5" Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.563821 4772 scope.go:117] "RemoveContainer" containerID="ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5" Jan 24 03:56:34 crc kubenswrapper[4772]: E0124 03:56:34.564610 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5\": container with ID starting with ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5 not found: ID does not exist" containerID="ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5" Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.564649 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5"} err="failed to get container status \"ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5\": rpc error: code = NotFound desc = could not find container \"ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5\": container with ID starting with ecfeffdb325797d065e86560d59a6307d78803de522711ab36c0a8fa1c2b72b5 not found: ID does not exist" Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.577771 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hhzdk"] Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.586235 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hhzdk"] Jan 24 03:56:34 crc kubenswrapper[4772]: I0124 03:56:34.602570 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-tbqs5"] Jan 24 03:56:35 crc kubenswrapper[4772]: I0124 03:56:35.496663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" event={"ID":"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9","Type":"ContainerStarted","Data":"630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803"} Jan 24 03:56:35 crc kubenswrapper[4772]: I0124 03:56:35.497088 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" event={"ID":"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9","Type":"ContainerStarted","Data":"40d4599e3c44d7719a18a256af159230b688eeb546bff36c91ae1d0f044b3657"} Jan 24 03:56:35 crc kubenswrapper[4772]: I0124 03:56:35.525024 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" podStartSLOduration=2.00644699 podStartE2EDuration="2.524997793s" podCreationTimestamp="2026-01-24 03:56:33 +0000 UTC" firstStartedPulling="2026-01-24 03:56:34.610071222 +0000 UTC m=+891.647161947" lastFinishedPulling="2026-01-24 03:56:35.128622025 +0000 UTC m=+892.165712750" observedRunningTime="2026-01-24 03:56:35.518597461 +0000 UTC m=+892.555688196" watchObservedRunningTime="2026-01-24 03:56:35.524997793 +0000 UTC m=+892.562088518" Jan 24 03:56:35 crc kubenswrapper[4772]: I0124 03:56:35.665814 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821f2a99-5a55-4043-9b0a-ec9197151c46" path="/var/lib/kubelet/pods/821f2a99-5a55-4043-9b0a-ec9197151c46/volumes" Jan 24 03:56:36 crc kubenswrapper[4772]: I0124 03:56:36.770207 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbcmx"] Jan 24 03:56:36 crc kubenswrapper[4772]: I0124 03:56:36.770790 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbcmx" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="registry-server" containerID="cri-o://739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8" gracePeriod=2 Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.268059 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.371423 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-utilities\") pod \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.371499 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-catalog-content\") pod \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.371551 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqsd4\" (UniqueName: \"kubernetes.io/projected/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-kube-api-access-xqsd4\") pod \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\" (UID: \"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469\") " Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.372791 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-utilities" (OuterVolumeSpecName: "utilities") pod "05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" (UID: "05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.386098 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-kube-api-access-xqsd4" (OuterVolumeSpecName: "kube-api-access-xqsd4") pod "05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" (UID: "05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469"). InnerVolumeSpecName "kube-api-access-xqsd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.421749 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" (UID: "05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.474264 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.474309 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.474327 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqsd4\" (UniqueName: \"kubernetes.io/projected/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469-kube-api-access-xqsd4\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.513731 4772 generic.go:334] "Generic (PLEG): container finished" podID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerID="739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8" exitCode=0 Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.513861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbcmx" event={"ID":"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469","Type":"ContainerDied","Data":"739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8"} Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.513894 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbcmx" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.513933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbcmx" event={"ID":"05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469","Type":"ContainerDied","Data":"9328484aa26d2ca0567ca887b72239a878436167872ab645c1c370c18ea00d93"} Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.513955 4772 scope.go:117] "RemoveContainer" containerID="739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.547133 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbcmx"] Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.547994 4772 scope.go:117] "RemoveContainer" containerID="16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.551652 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbcmx"] Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.571126 4772 scope.go:117] "RemoveContainer" containerID="7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.593224 4772 scope.go:117] "RemoveContainer" containerID="739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8" Jan 24 03:56:37 crc kubenswrapper[4772]: E0124 03:56:37.594067 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8\": container with ID starting with 739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8 not found: ID does not exist" containerID="739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.594118 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8"} err="failed to get container status \"739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8\": rpc error: code = NotFound desc = could not find container \"739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8\": container with ID starting with 739202fa5816bea4b3475c2d3cc64b7ef3fc05bc35d6cf59ac82e56daa9ba2f8 not found: ID does not exist" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.594158 4772 scope.go:117] "RemoveContainer" containerID="16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1" Jan 24 03:56:37 crc kubenswrapper[4772]: E0124 03:56:37.595294 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1\": container with ID starting with 16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1 not found: ID does not exist" containerID="16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.595349 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1"} err="failed to get container status \"16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1\": rpc error: code = NotFound desc = could not find 
container \"16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1\": container with ID starting with 16c1f5842b690b55c6c9918bbe31ea18530320fe64f9dfefc6dd47945f5b87a1 not found: ID does not exist" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.595414 4772 scope.go:117] "RemoveContainer" containerID="7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14" Jan 24 03:56:37 crc kubenswrapper[4772]: E0124 03:56:37.596540 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14\": container with ID starting with 7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14 not found: ID does not exist" containerID="7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.596593 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14"} err="failed to get container status \"7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14\": rpc error: code = NotFound desc = could not find container \"7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14\": container with ID starting with 7c80429d938567ae3cc14acb603ea4e4068945677fdc25c99ae2c5f030f68c14 not found: ID does not exist" Jan 24 03:56:37 crc kubenswrapper[4772]: I0124 03:56:37.674058 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" path="/var/lib/kubelet/pods/05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469/volumes" Jan 24 03:56:41 crc kubenswrapper[4772]: I0124 03:56:41.741731 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/memcached-0" Jan 24 03:56:43 crc kubenswrapper[4772]: I0124 03:56:43.532521 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/openstack-galera-2" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerName="galera" probeResult="failure" output=< Jan 24 03:56:43 crc kubenswrapper[4772]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 24 03:56:43 crc kubenswrapper[4772]: > Jan 24 03:56:43 crc kubenswrapper[4772]: I0124 03:56:43.565520 4772 generic.go:334] "Generic (PLEG): container finished" podID="a651548f-d95b-43ef-ad1a-6c9c2a67b1d9" containerID="1a3f58a1f79febf026825b7ab2c7b9bce436517c22efb037662168685743cfed" exitCode=0 Jan 24 03:56:43 crc kubenswrapper[4772]: I0124 03:56:43.565646 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" event={"ID":"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9","Type":"ContainerDied","Data":"1a3f58a1f79febf026825b7ab2c7b9bce436517c22efb037662168685743cfed"} Jan 24 03:56:44 crc kubenswrapper[4772]: I0124 03:56:44.087853 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:44 crc kubenswrapper[4772]: I0124 03:56:44.088218 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:44 crc kubenswrapper[4772]: I0124 03:56:44.122758 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:44 crc kubenswrapper[4772]: I0124 03:56:44.600485 4772 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 03:56:44 crc kubenswrapper[4772]: I0124 03:56:44.932050 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.015240 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8994\" (UniqueName: \"kubernetes.io/projected/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-kube-api-access-l8994\") pod \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.015337 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-operator-scripts\") pod \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\" (UID: \"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9\") " Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.016180 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a651548f-d95b-43ef-ad1a-6c9c2a67b1d9" (UID: "a651548f-d95b-43ef-ad1a-6c9c2a67b1d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.021650 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-kube-api-access-l8994" (OuterVolumeSpecName: "kube-api-access-l8994") pod "a651548f-d95b-43ef-ad1a-6c9c2a67b1d9" (UID: "a651548f-d95b-43ef-ad1a-6c9c2a67b1d9"). InnerVolumeSpecName "kube-api-access-l8994". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.116706 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8994\" (UniqueName: \"kubernetes.io/projected/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-kube-api-access-l8994\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.116761 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.583141 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" event={"ID":"a651548f-d95b-43ef-ad1a-6c9c2a67b1d9","Type":"ContainerDied","Data":"a2376a12f1735c35a1312aea6d747084e0b9b812e7a07a41f999bb3939d2a702"} Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.583214 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2376a12f1735c35a1312aea6d747084e0b9b812e7a07a41f999bb3939d2a702" Jan 24 03:56:45 crc kubenswrapper[4772]: I0124 03:56:45.583165 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-k8xp2" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.779667 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lmfpx"] Jan 24 03:56:46 crc kubenswrapper[4772]: E0124 03:56:46.780459 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821f2a99-5a55-4043-9b0a-ec9197151c46" containerName="registry-server" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780478 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="821f2a99-5a55-4043-9b0a-ec9197151c46" containerName="registry-server" Jan 24 03:56:46 crc kubenswrapper[4772]: E0124 03:56:46.780502 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="registry-server" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780512 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="registry-server" Jan 24 03:56:46 crc kubenswrapper[4772]: E0124 03:56:46.780529 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="extract-utilities" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780544 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="extract-utilities" Jan 24 03:56:46 crc kubenswrapper[4772]: E0124 03:56:46.780567 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a651548f-d95b-43ef-ad1a-6c9c2a67b1d9" containerName="mariadb-account-create-update" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780578 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a651548f-d95b-43ef-ad1a-6c9c2a67b1d9" containerName="mariadb-account-create-update" Jan 24 03:56:46 crc kubenswrapper[4772]: E0124 03:56:46.780596 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="extract-content" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780606 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="extract-content" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780829 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a651548f-d95b-43ef-ad1a-6c9c2a67b1d9" containerName="mariadb-account-create-update" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780851 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b3b846-6b6b-4d91-a0e7-a8d2fdbd7469" containerName="registry-server" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.780878 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="821f2a99-5a55-4043-9b0a-ec9197151c46" containerName="registry-server" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.782213 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.798685 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmfpx"] Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.839195 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8wch\" (UniqueName: \"kubernetes.io/projected/796df85d-c751-445f-afa8-f5ee4d1e7357-kube-api-access-n8wch\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.839249 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-catalog-content\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.839283 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-utilities\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.940885 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8wch\" (UniqueName: \"kubernetes.io/projected/796df85d-c751-445f-afa8-f5ee4d1e7357-kube-api-access-n8wch\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.940946 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-catalog-content\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.940991 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-utilities\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.941527 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-utilities\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.941589 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-catalog-content\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:46 crc kubenswrapper[4772]: I0124 03:56:46.960402 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n8wch\" (UniqueName: \"kubernetes.io/projected/796df85d-c751-445f-afa8-f5ee4d1e7357-kube-api-access-n8wch\") pod \"redhat-marketplace-lmfpx\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:47 crc kubenswrapper[4772]: I0124 03:56:47.101685 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:47 crc kubenswrapper[4772]: I0124 03:56:47.561237 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmfpx"] Jan 24 03:56:47 crc kubenswrapper[4772]: I0124 03:56:47.598285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmfpx" event={"ID":"796df85d-c751-445f-afa8-f5ee4d1e7357","Type":"ContainerStarted","Data":"b8db0f30f5f09f1e678308edbc5c72755d4b91cbeca482857974efdc4682e083"} Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.284348 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.359471 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.607352 4772 generic.go:334] "Generic (PLEG): container finished" podID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerID="cf7e7de432ead35012a09e7accb415663ef75a7f88d9cd826c3b6cfcf0acb038" exitCode=0 Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.608073 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmfpx" event={"ID":"796df85d-c751-445f-afa8-f5ee4d1e7357","Type":"ContainerDied","Data":"cf7e7de432ead35012a09e7accb415663ef75a7f88d9cd826c3b6cfcf0acb038"} Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.821396 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8"] Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.823783 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.827436 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wzk78" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.829921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8"] Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.871682 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.871864 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksdw\" (UniqueName: \"kubernetes.io/projected/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-kube-api-access-vksdw\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.872037 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.973967 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.974132 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.974186 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vksdw\" (UniqueName: \"kubernetes.io/projected/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-kube-api-access-vksdw\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.975847 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.976352 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:48 crc kubenswrapper[4772]: I0124 03:56:48.997855 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vksdw\" (UniqueName: \"kubernetes.io/projected/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-kube-api-access-vksdw\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:49 crc kubenswrapper[4772]: I0124 03:56:49.152074 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:49 crc kubenswrapper[4772]: I0124 03:56:49.614956 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmfpx" event={"ID":"796df85d-c751-445f-afa8-f5ee4d1e7357","Type":"ContainerStarted","Data":"08d99b72e0e9162cedfe6d0863f8dd907972b0aa198b176cb9a03e295eca7b06"} Jan 24 03:56:49 crc kubenswrapper[4772]: I0124 03:56:49.775668 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8"] Jan 24 03:56:50 crc kubenswrapper[4772]: I0124 03:56:50.623159 4772 generic.go:334] "Generic (PLEG): container finished" podID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerID="e59ccb7a812ec91fae49516f8b35078b655ca2bab8dd521d6c004931bc3b659a" exitCode=0 Jan 24 03:56:50 crc kubenswrapper[4772]: I0124 03:56:50.623220 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" event={"ID":"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7","Type":"ContainerDied","Data":"e59ccb7a812ec91fae49516f8b35078b655ca2bab8dd521d6c004931bc3b659a"} Jan 24 03:56:50 crc kubenswrapper[4772]: I0124 03:56:50.623602 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" event={"ID":"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7","Type":"ContainerStarted","Data":"c554f6732f9a1a50c00487b4be032ec66445abe9608608ce697589c123ac9645"} Jan 24 03:56:50 crc kubenswrapper[4772]: I0124 03:56:50.626393 4772 generic.go:334] "Generic (PLEG): container finished" podID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerID="08d99b72e0e9162cedfe6d0863f8dd907972b0aa198b176cb9a03e295eca7b06" exitCode=0 Jan 24 03:56:50 crc kubenswrapper[4772]: I0124 03:56:50.626501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmfpx" event={"ID":"796df85d-c751-445f-afa8-f5ee4d1e7357","Type":"ContainerDied","Data":"08d99b72e0e9162cedfe6d0863f8dd907972b0aa198b176cb9a03e295eca7b06"} Jan 24 03:56:51 crc kubenswrapper[4772]: 
I0124 03:56:51.636042 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmfpx" event={"ID":"796df85d-c751-445f-afa8-f5ee4d1e7357","Type":"ContainerStarted","Data":"5ec51fdeb8b5c3986aae3f5689d4d7b18afa159bc7c21a9ce2a5daf26d533b1d"} Jan 24 03:56:51 crc kubenswrapper[4772]: I0124 03:56:51.663047 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lmfpx" podStartSLOduration=3.261367913 podStartE2EDuration="5.663030142s" podCreationTimestamp="2026-01-24 03:56:46 +0000 UTC" firstStartedPulling="2026-01-24 03:56:48.609627661 +0000 UTC m=+905.646718396" lastFinishedPulling="2026-01-24 03:56:51.01128989 +0000 UTC m=+908.048380625" observedRunningTime="2026-01-24 03:56:51.657476334 +0000 UTC m=+908.694567059" watchObservedRunningTime="2026-01-24 03:56:51.663030142 +0000 UTC m=+908.700120867" Jan 24 03:56:52 crc kubenswrapper[4772]: I0124 03:56:52.644950 4772 generic.go:334] "Generic (PLEG): container finished" podID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerID="ddd731f930a2b93869cb24ef6f583c580cde489f5243c8c8949edd6d85cde6da" exitCode=0 Jan 24 03:56:52 crc kubenswrapper[4772]: I0124 03:56:52.645021 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" event={"ID":"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7","Type":"ContainerDied","Data":"ddd731f930a2b93869cb24ef6f583c580cde489f5243c8c8949edd6d85cde6da"} Jan 24 03:56:52 crc kubenswrapper[4772]: I0124 03:56:52.974386 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r2n7p"] Jan 24 03:56:52 crc kubenswrapper[4772]: I0124 03:56:52.976047 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:52 crc kubenswrapper[4772]: I0124 03:56:52.990255 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2n7p"] Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.036860 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rmd\" (UniqueName: \"kubernetes.io/projected/06103dea-8d43-4e8e-ae26-01e57b8d3131-kube-api-access-w8rmd\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.036972 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-catalog-content\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.037105 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-utilities\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.138288 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-utilities\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.138349 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8rmd\" (UniqueName: \"kubernetes.io/projected/06103dea-8d43-4e8e-ae26-01e57b8d3131-kube-api-access-w8rmd\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.138373 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-catalog-content\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.138854 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-catalog-content\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.139135 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-utilities\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.177198 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-w8rmd\" (UniqueName: \"kubernetes.io/projected/06103dea-8d43-4e8e-ae26-01e57b8d3131-kube-api-access-w8rmd\") pod \"redhat-operators-r2n7p\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.317104 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.653462 4772 generic.go:334] "Generic (PLEG): container finished" podID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerID="a068e3b8329a5e9203fac1e2ec28103747deb4ae4cc17c1dcbbc99d0c321caad" exitCode=0 Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.653504 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" event={"ID":"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7","Type":"ContainerDied","Data":"a068e3b8329a5e9203fac1e2ec28103747deb4ae4cc17c1dcbbc99d0c321caad"} Jan 24 03:56:53 crc kubenswrapper[4772]: I0124 03:56:53.815964 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2n7p"] Jan 24 03:56:53 crc kubenswrapper[4772]: W0124 03:56:53.823066 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06103dea_8d43_4e8e_ae26_01e57b8d3131.slice/crio-b1ed499a8eaea3bcf96ebef1a1b67bf9003457ac031807ce4bce30e48e2bd515 WatchSource:0}: Error finding container b1ed499a8eaea3bcf96ebef1a1b67bf9003457ac031807ce4bce30e48e2bd515: Status 404 returned error can't find the container with id b1ed499a8eaea3bcf96ebef1a1b67bf9003457ac031807ce4bce30e48e2bd515 Jan 24 03:56:54 crc kubenswrapper[4772]: I0124 03:56:54.006082 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:54 crc kubenswrapper[4772]: I0124 03:56:54.101814 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/openstack-galera-1" Jan 24 03:56:54 crc kubenswrapper[4772]: I0124 03:56:54.660967 4772 generic.go:334] "Generic (PLEG): container finished" podID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerID="8b08ba36a8807772cb4e3b438fcdf8b73e00d3e6452630c9e0c3886dfbb0ff34" exitCode=0 Jan 24 03:56:54 crc kubenswrapper[4772]: I0124 03:56:54.661034 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2n7p" event={"ID":"06103dea-8d43-4e8e-ae26-01e57b8d3131","Type":"ContainerDied","Data":"8b08ba36a8807772cb4e3b438fcdf8b73e00d3e6452630c9e0c3886dfbb0ff34"} Jan 24 03:56:54 crc kubenswrapper[4772]: I0124 03:56:54.661069 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2n7p" event={"ID":"06103dea-8d43-4e8e-ae26-01e57b8d3131","Type":"ContainerStarted","Data":"b1ed499a8eaea3bcf96ebef1a1b67bf9003457ac031807ce4bce30e48e2bd515"} Jan 24 03:56:54 crc kubenswrapper[4772]: I0124 03:56:54.957197 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.064164 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vksdw\" (UniqueName: \"kubernetes.io/projected/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-kube-api-access-vksdw\") pod \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.064436 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-util\") pod \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.064594 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-bundle\") pod \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\" (UID: \"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7\") " Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.076282 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-bundle" (OuterVolumeSpecName: "bundle") pod "5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" (UID: "5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.079664 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-kube-api-access-vksdw" (OuterVolumeSpecName: "kube-api-access-vksdw") pod "5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" (UID: "5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7"). InnerVolumeSpecName "kube-api-access-vksdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.101325 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-util" (OuterVolumeSpecName: "util") pod "5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" (UID: "5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.166077 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vksdw\" (UniqueName: \"kubernetes.io/projected/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-kube-api-access-vksdw\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.166108 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-util\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.166119 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.669012 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2n7p" event={"ID":"06103dea-8d43-4e8e-ae26-01e57b8d3131","Type":"ContainerStarted","Data":"999154c2a961190bb61791bae131b94b4f355d15fc5f715707e3e1e6cd8582b9"} Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.671973 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" event={"ID":"5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7","Type":"ContainerDied","Data":"c554f6732f9a1a50c00487b4be032ec66445abe9608608ce697589c123ac9645"} Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.672005 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c554f6732f9a1a50c00487b4be032ec66445abe9608608ce697589c123ac9645" Jan 24 03:56:55 crc kubenswrapper[4772]: I0124 03:56:55.672029 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8" Jan 24 03:56:57 crc kubenswrapper[4772]: I0124 03:56:57.102223 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:57 crc kubenswrapper[4772]: I0124 03:56:57.102750 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:57 crc kubenswrapper[4772]: I0124 03:56:57.186846 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:57 crc kubenswrapper[4772]: I0124 03:56:57.687634 4772 generic.go:334] "Generic (PLEG): container finished" podID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerID="999154c2a961190bb61791bae131b94b4f355d15fc5f715707e3e1e6cd8582b9" exitCode=0 Jan 24 03:56:57 crc kubenswrapper[4772]: I0124 03:56:57.687733 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2n7p" event={"ID":"06103dea-8d43-4e8e-ae26-01e57b8d3131","Type":"ContainerDied","Data":"999154c2a961190bb61791bae131b94b4f355d15fc5f715707e3e1e6cd8582b9"} Jan 24 03:56:57 crc kubenswrapper[4772]: I0124 03:56:57.741268 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:56:58 crc kubenswrapper[4772]: I0124 03:56:58.698148 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2n7p" event={"ID":"06103dea-8d43-4e8e-ae26-01e57b8d3131","Type":"ContainerStarted","Data":"733ffbc7dc16f7a6f613cfc7cf67fb4a3c2e137316cd0aeb46ac86eb7a3f224e"} Jan 24 03:56:58 crc kubenswrapper[4772]: I0124 03:56:58.730023 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r2n7p" podStartSLOduration=3.004898787 podStartE2EDuration="6.729995698s" podCreationTimestamp="2026-01-24 03:56:52 +0000 UTC" firstStartedPulling="2026-01-24 03:56:54.662421063 +0000 UTC m=+911.699511788" lastFinishedPulling="2026-01-24 03:56:58.387517974 +0000 UTC m=+915.424608699" observedRunningTime="2026-01-24 03:56:58.723534863 +0000 UTC m=+915.760625598" watchObservedRunningTime="2026-01-24 03:56:58.729995698 +0000 UTC m=+915.767086433" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.318232 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.318758 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.382226 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx"] Jan 24 03:57:03 crc kubenswrapper[4772]: E0124 03:57:03.382616 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerName="util" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.382642 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerName="util" Jan 24 03:57:03 crc kubenswrapper[4772]: E0124 03:57:03.382659 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerName="pull" Jan 24 03:57:03 crc 
kubenswrapper[4772]: I0124 03:57:03.382672 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerName="pull" Jan 24 03:57:03 crc kubenswrapper[4772]: E0124 03:57:03.382684 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerName="extract" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.382693 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerName="extract" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.382856 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" containerName="extract" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.383445 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.384872 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-9vk62" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.422551 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx"] Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.588564 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d7tp\" (UniqueName: \"kubernetes.io/projected/fb0c9e11-a0ff-4748-a8ce-aeb96074bae3-kube-api-access-5d7tp\") pod \"rabbitmq-cluster-operator-779fc9694b-dflxx\" (UID: \"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.690103 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d7tp\" (UniqueName: \"kubernetes.io/projected/fb0c9e11-a0ff-4748-a8ce-aeb96074bae3-kube-api-access-5d7tp\") pod \"rabbitmq-cluster-operator-779fc9694b-dflxx\" (UID: \"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.725631 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d7tp\" (UniqueName: \"kubernetes.io/projected/fb0c9e11-a0ff-4748-a8ce-aeb96074bae3-kube-api-access-5d7tp\") pod \"rabbitmq-cluster-operator-779fc9694b-dflxx\" (UID: \"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" Jan 24 03:57:03 crc kubenswrapper[4772]: I0124 03:57:03.762221 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" Jan 24 03:57:04 crc kubenswrapper[4772]: I0124 03:57:04.263372 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx"] Jan 24 03:57:04 crc kubenswrapper[4772]: I0124 03:57:04.390800 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2n7p" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="registry-server" probeResult="failure" output=< Jan 24 03:57:04 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Jan 24 03:57:04 crc kubenswrapper[4772]: > Jan 24 03:57:04 crc kubenswrapper[4772]: I0124 03:57:04.733078 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" event={"ID":"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3","Type":"ContainerStarted","Data":"4898a43cdafceafd1b4eed4330f50fed913152326c452bd156977dfcbe4d7789"} Jan 24 03:57:05 crc kubenswrapper[4772]: I0124 03:57:05.366089 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmfpx"] Jan 24 03:57:05 crc kubenswrapper[4772]: I0124 03:57:05.366372 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lmfpx" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="registry-server" containerID="cri-o://5ec51fdeb8b5c3986aae3f5689d4d7b18afa159bc7c21a9ce2a5daf26d533b1d" gracePeriod=2 Jan 24 03:57:05 crc kubenswrapper[4772]: I0124 03:57:05.741816 4772 generic.go:334] "Generic (PLEG): container finished" podID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerID="5ec51fdeb8b5c3986aae3f5689d4d7b18afa159bc7c21a9ce2a5daf26d533b1d" exitCode=0 Jan 24 03:57:05 crc kubenswrapper[4772]: I0124 03:57:05.741894 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmfpx" event={"ID":"796df85d-c751-445f-afa8-f5ee4d1e7357","Type":"ContainerDied","Data":"5ec51fdeb8b5c3986aae3f5689d4d7b18afa159bc7c21a9ce2a5daf26d533b1d"} Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.310032 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.345423 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-catalog-content\") pod \"796df85d-c751-445f-afa8-f5ee4d1e7357\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.345586 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8wch\" (UniqueName: \"kubernetes.io/projected/796df85d-c751-445f-afa8-f5ee4d1e7357-kube-api-access-n8wch\") pod \"796df85d-c751-445f-afa8-f5ee4d1e7357\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.345646 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-utilities\") pod \"796df85d-c751-445f-afa8-f5ee4d1e7357\" (UID: \"796df85d-c751-445f-afa8-f5ee4d1e7357\") " Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.347037 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-utilities" (OuterVolumeSpecName: "utilities") pod "796df85d-c751-445f-afa8-f5ee4d1e7357" (UID: "796df85d-c751-445f-afa8-f5ee4d1e7357"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.352220 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796df85d-c751-445f-afa8-f5ee4d1e7357-kube-api-access-n8wch" (OuterVolumeSpecName: "kube-api-access-n8wch") pod "796df85d-c751-445f-afa8-f5ee4d1e7357" (UID: "796df85d-c751-445f-afa8-f5ee4d1e7357"). InnerVolumeSpecName "kube-api-access-n8wch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.376370 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "796df85d-c751-445f-afa8-f5ee4d1e7357" (UID: "796df85d-c751-445f-afa8-f5ee4d1e7357"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.447389 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.447426 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796df85d-c751-445f-afa8-f5ee4d1e7357-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.447441 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8wch\" (UniqueName: \"kubernetes.io/projected/796df85d-c751-445f-afa8-f5ee4d1e7357-kube-api-access-n8wch\") on node \"crc\" DevicePath \"\"" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.768159 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmfpx" event={"ID":"796df85d-c751-445f-afa8-f5ee4d1e7357","Type":"ContainerDied","Data":"b8db0f30f5f09f1e678308edbc5c72755d4b91cbeca482857974efdc4682e083"} Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.768231 4772 scope.go:117] "RemoveContainer" containerID="5ec51fdeb8b5c3986aae3f5689d4d7b18afa159bc7c21a9ce2a5daf26d533b1d" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.768452 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmfpx" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.802997 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmfpx"] Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.807453 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmfpx"] Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.807506 4772 scope.go:117] "RemoveContainer" containerID="08d99b72e0e9162cedfe6d0863f8dd907972b0aa198b176cb9a03e295eca7b06" Jan 24 03:57:06 crc kubenswrapper[4772]: I0124 03:57:06.846067 4772 scope.go:117] "RemoveContainer" containerID="cf7e7de432ead35012a09e7accb415663ef75a7f88d9cd826c3b6cfcf0acb038" Jan 24 03:57:07 crc kubenswrapper[4772]: I0124 03:57:07.666130 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" path="/var/lib/kubelet/pods/796df85d-c751-445f-afa8-f5ee4d1e7357/volumes" Jan 24 03:57:10 crc kubenswrapper[4772]: I0124 03:57:10.807232 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" event={"ID":"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3","Type":"ContainerStarted","Data":"e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a"} Jan 24 03:57:10 crc kubenswrapper[4772]: I0124 03:57:10.831006 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" podStartSLOduration=2.298616131 podStartE2EDuration="7.830985049s" podCreationTimestamp="2026-01-24 03:57:03 +0000 UTC" firstStartedPulling="2026-01-24 03:57:04.284687057 +0000 UTC m=+921.321777782" lastFinishedPulling="2026-01-24 03:57:09.817055945 +0000 UTC m=+926.854146700" observedRunningTime="2026-01-24 03:57:10.825063309 +0000 UTC m=+927.862154064" watchObservedRunningTime="2026-01-24 03:57:10.830985049 +0000 UTC m=+927.868075774" Jan 24 03:57:13 crc kubenswrapper[4772]: I0124 
03:57:13.368177 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:57:13 crc kubenswrapper[4772]: I0124 03:57:13.435102 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.449429 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Jan 24 03:57:14 crc kubenswrapper[4772]: E0124 03:57:14.449750 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="extract-content" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.449768 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="extract-content" Jan 24 03:57:14 crc kubenswrapper[4772]: E0124 03:57:14.449781 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="registry-server" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.449788 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="registry-server" Jan 24 03:57:14 crc kubenswrapper[4772]: E0124 03:57:14.449812 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="extract-utilities" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.449823 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="extract-utilities" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.449975 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="796df85d-c751-445f-afa8-f5ee4d1e7357" containerName="registry-server" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.450765 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.455223 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"rabbitmq-default-user" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.456458 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.456474 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"rabbitmq-plugins-conf" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.456600 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"rabbitmq-server-conf" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.464036 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"rabbitmq-server-dockercfg-4szgh" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.479629 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571532 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571580 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571612 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86zfw\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-kube-api-access-86zfw\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571646 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05936f4c-b4df-4470-bff4-4ea5fee045ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571663 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05936f4c-b4df-4470-bff4-4ea5fee045ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " 
pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571719 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05936f4c-b4df-4470-bff4-4ea5fee045ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.571750 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672779 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05936f4c-b4df-4470-bff4-4ea5fee045ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672825 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05936f4c-b4df-4470-bff4-4ea5fee045ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672886 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672910 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672938 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86zfw\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-kube-api-access-86zfw\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672965 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05936f4c-b4df-4470-bff4-4ea5fee045ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " 
pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.672980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.674224 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.674606 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.675259 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05936f4c-b4df-4470-bff4-4ea5fee045ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.677978 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.678252 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/425952cddec545cd3d41a3f3810548581486e690f357bd70ad9f8316daf20fb8/globalmount\"" pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.680700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05936f4c-b4df-4470-bff4-4ea5fee045ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.680723 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.682285 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05936f4c-b4df-4470-bff4-4ea5fee045ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.693871 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-86zfw\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-kube-api-access-86zfw\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.706089 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\") pod \"rabbitmq-server-0\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:14 crc kubenswrapper[4772]: I0124 03:57:14.795278 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0" Jan 24 03:57:15 crc kubenswrapper[4772]: I0124 03:57:15.064107 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Jan 24 03:57:15 crc kubenswrapper[4772]: I0124 03:57:15.861179 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"05936f4c-b4df-4470-bff4-4ea5fee045ad","Type":"ContainerStarted","Data":"b9b0dabadf4474efc42404c18fa491c73d9d225f899c98f4c3f4098744c0b902"} Jan 24 03:57:16 crc kubenswrapper[4772]: I0124 03:57:16.992204 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-vpgcd"] Jan 24 03:57:16 crc kubenswrapper[4772]: I0124 03:57:16.994176 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 03:57:16 crc kubenswrapper[4772]: I0124 03:57:16.997290 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-z7tqx" Jan 24 03:57:17 crc kubenswrapper[4772]: I0124 03:57:17.002655 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-vpgcd"] Jan 24 03:57:17 crc kubenswrapper[4772]: I0124 03:57:17.109938 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8cg\" (UniqueName: \"kubernetes.io/projected/73bc974d-1516-41a1-afc8-588127374117-kube-api-access-qp8cg\") pod \"keystone-operator-index-vpgcd\" (UID: \"73bc974d-1516-41a1-afc8-588127374117\") " pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 03:57:17 crc kubenswrapper[4772]: I0124 03:57:17.211365 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp8cg\" (UniqueName: \"kubernetes.io/projected/73bc974d-1516-41a1-afc8-588127374117-kube-api-access-qp8cg\") pod \"keystone-operator-index-vpgcd\" (UID: \"73bc974d-1516-41a1-afc8-588127374117\") " pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 03:57:17 crc kubenswrapper[4772]: I0124 03:57:17.234354 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp8cg\" (UniqueName: \"kubernetes.io/projected/73bc974d-1516-41a1-afc8-588127374117-kube-api-access-qp8cg\") pod \"keystone-operator-index-vpgcd\" (UID: \"73bc974d-1516-41a1-afc8-588127374117\") " pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 03:57:17 crc kubenswrapper[4772]: I0124 03:57:17.329530 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 03:57:18 crc kubenswrapper[4772]: I0124 03:57:18.169680 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2n7p"] Jan 24 03:57:18 crc kubenswrapper[4772]: I0124 03:57:18.170293 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r2n7p" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="registry-server" containerID="cri-o://733ffbc7dc16f7a6f613cfc7cf67fb4a3c2e137316cd0aeb46ac86eb7a3f224e" gracePeriod=2 Jan 24 03:57:18 crc kubenswrapper[4772]: I0124 03:57:18.887562 4772 generic.go:334] "Generic (PLEG): container finished" podID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerID="733ffbc7dc16f7a6f613cfc7cf67fb4a3c2e137316cd0aeb46ac86eb7a3f224e" exitCode=0 Jan 24 03:57:18 crc kubenswrapper[4772]: I0124 03:57:18.887734 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2n7p" event={"ID":"06103dea-8d43-4e8e-ae26-01e57b8d3131","Type":"ContainerDied","Data":"733ffbc7dc16f7a6f613cfc7cf67fb4a3c2e137316cd0aeb46ac86eb7a3f224e"} Jan 24 03:57:19 crc kubenswrapper[4772]: I0124 03:57:19.969214 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.064260 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-catalog-content\") pod \"06103dea-8d43-4e8e-ae26-01e57b8d3131\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.064324 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8rmd\" (UniqueName: \"kubernetes.io/projected/06103dea-8d43-4e8e-ae26-01e57b8d3131-kube-api-access-w8rmd\") pod \"06103dea-8d43-4e8e-ae26-01e57b8d3131\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.064351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-utilities\") pod \"06103dea-8d43-4e8e-ae26-01e57b8d3131\" (UID: \"06103dea-8d43-4e8e-ae26-01e57b8d3131\") " Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.065768 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-utilities" (OuterVolumeSpecName: "utilities") pod "06103dea-8d43-4e8e-ae26-01e57b8d3131" (UID: "06103dea-8d43-4e8e-ae26-01e57b8d3131"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.075144 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06103dea-8d43-4e8e-ae26-01e57b8d3131-kube-api-access-w8rmd" (OuterVolumeSpecName: "kube-api-access-w8rmd") pod "06103dea-8d43-4e8e-ae26-01e57b8d3131" (UID: "06103dea-8d43-4e8e-ae26-01e57b8d3131"). InnerVolumeSpecName "kube-api-access-w8rmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.166465 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8rmd\" (UniqueName: \"kubernetes.io/projected/06103dea-8d43-4e8e-ae26-01e57b8d3131-kube-api-access-w8rmd\") on node \"crc\" DevicePath \"\"" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.166511 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.186135 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06103dea-8d43-4e8e-ae26-01e57b8d3131" (UID: "06103dea-8d43-4e8e-ae26-01e57b8d3131"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.268380 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06103dea-8d43-4e8e-ae26-01e57b8d3131-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.910623 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2n7p" event={"ID":"06103dea-8d43-4e8e-ae26-01e57b8d3131","Type":"ContainerDied","Data":"b1ed499a8eaea3bcf96ebef1a1b67bf9003457ac031807ce4bce30e48e2bd515"} Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.910709 4772 scope.go:117] "RemoveContainer" containerID="733ffbc7dc16f7a6f613cfc7cf67fb4a3c2e137316cd0aeb46ac86eb7a3f224e" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.910927 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r2n7p" Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.957939 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2n7p"] Jan 24 03:57:20 crc kubenswrapper[4772]: I0124 03:57:20.963709 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r2n7p"] Jan 24 03:57:21 crc kubenswrapper[4772]: I0124 03:57:21.256513 4772 scope.go:117] "RemoveContainer" containerID="999154c2a961190bb61791bae131b94b4f355d15fc5f715707e3e1e6cd8582b9" Jan 24 03:57:21 crc kubenswrapper[4772]: I0124 03:57:21.496208 4772 scope.go:117] "RemoveContainer" containerID="8b08ba36a8807772cb4e3b438fcdf8b73e00d3e6452630c9e0c3886dfbb0ff34" Jan 24 03:57:21 crc kubenswrapper[4772]: I0124 03:57:21.666594 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" path="/var/lib/kubelet/pods/06103dea-8d43-4e8e-ae26-01e57b8d3131/volumes" Jan 24 03:57:21 crc kubenswrapper[4772]: I0124 03:57:21.763895 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-vpgcd"] Jan 24 03:57:22 crc kubenswrapper[4772]: W0124 03:57:22.233965 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73bc974d_1516_41a1_afc8_588127374117.slice/crio-04349d909a0c75429eb42fc2dcdb9edb0c4de247b0f50b8ecca4635ae61bc7fb WatchSource:0}: Error finding container 04349d909a0c75429eb42fc2dcdb9edb0c4de247b0f50b8ecca4635ae61bc7fb: Status 404 returned error can't find the container with id 04349d909a0c75429eb42fc2dcdb9edb0c4de247b0f50b8ecca4635ae61bc7fb Jan 24 03:57:22 crc kubenswrapper[4772]: I0124 03:57:22.933095 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vpgcd" event={"ID":"73bc974d-1516-41a1-afc8-588127374117","Type":"ContainerStarted","Data":"04349d909a0c75429eb42fc2dcdb9edb0c4de247b0f50b8ecca4635ae61bc7fb"} Jan 24 03:57:23 crc kubenswrapper[4772]: I0124 03:57:23.940928 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"05936f4c-b4df-4470-bff4-4ea5fee045ad","Type":"ContainerStarted","Data":"e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb"} Jan 24 03:57:23 crc kubenswrapper[4772]: I0124 03:57:23.943125 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vpgcd" event={"ID":"73bc974d-1516-41a1-afc8-588127374117","Type":"ContainerStarted","Data":"26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804"} Jan 24 03:57:23 crc kubenswrapper[4772]: I0124 03:57:23.984704 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-vpgcd" podStartSLOduration=7.066318427 podStartE2EDuration="7.984679284s" podCreationTimestamp="2026-01-24 03:57:16 +0000 UTC" firstStartedPulling="2026-01-24 03:57:22.272674627 +0000 UTC m=+939.309765352" lastFinishedPulling="2026-01-24 03:57:23.191035484 +0000 UTC m=+940.228126209" observedRunningTime="2026-01-24 03:57:23.984278883 +0000 UTC m=+941.021369608" watchObservedRunningTime="2026-01-24 03:57:23.984679284 +0000 UTC m=+941.021770009" Jan 24 03:57:27 crc kubenswrapper[4772]: I0124 03:57:27.331253 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 03:57:27 crc 
kubenswrapper[4772]: I0124 03:57:27.331607 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-vpgcd"
Jan 24 03:57:27 crc kubenswrapper[4772]: I0124 03:57:27.363370 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-vpgcd"
Jan 24 03:57:37 crc kubenswrapper[4772]: I0124 03:57:37.388047 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-vpgcd"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.866393 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"]
Jan 24 03:57:48 crc kubenswrapper[4772]: E0124 03:57:48.867866 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="extract-utilities"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.867899 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="extract-utilities"
Jan 24 03:57:48 crc kubenswrapper[4772]: E0124 03:57:48.867947 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="registry-server"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.867961 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="registry-server"
Jan 24 03:57:48 crc kubenswrapper[4772]: E0124 03:57:48.868002 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="extract-content"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.868020 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="extract-content"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.868278 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="06103dea-8d43-4e8e-ae26-01e57b8d3131" containerName="registry-server"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.869657 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.872396 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wzk78"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.880710 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"]
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.915731 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.915806 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5fnp\" (UniqueName: \"kubernetes.io/projected/39506910-41ee-49b0-99f6-f11c28930385-kube-api-access-j5fnp\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:48 crc kubenswrapper[4772]: I0124 03:57:48.915879 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-bundle\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.016933 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.016993 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5fnp\" (UniqueName: \"kubernetes.io/projected/39506910-41ee-49b0-99f6-f11c28930385-kube-api-access-j5fnp\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.017516 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-bundle\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.017773 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.018035 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-bundle\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.045579 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5fnp\" (UniqueName: \"kubernetes.io/projected/39506910-41ee-49b0-99f6-f11c28930385-kube-api-access-j5fnp\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.197427 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:49 crc kubenswrapper[4772]: I0124 03:57:49.649872 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"]
Jan 24 03:57:50 crc kubenswrapper[4772]: I0124 03:57:50.424808 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n" event={"ID":"39506910-41ee-49b0-99f6-f11c28930385","Type":"ContainerStarted","Data":"fd348ad694e216e780a34feccb3bab22dcc86b4525f6ec42ceda1031699a4635"}
Jan 24 03:57:51 crc kubenswrapper[4772]: I0124 03:57:51.435723 4772 generic.go:334] "Generic (PLEG): container finished" podID="39506910-41ee-49b0-99f6-f11c28930385" containerID="97e7af0331bfc3d9b793e20539f3caae6e638a88abfd8d69b891348f996c31c7" exitCode=0
Jan 24 03:57:51 crc kubenswrapper[4772]: I0124 03:57:51.435826 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n" event={"ID":"39506910-41ee-49b0-99f6-f11c28930385","Type":"ContainerDied","Data":"97e7af0331bfc3d9b793e20539f3caae6e638a88abfd8d69b891348f996c31c7"}
Jan 24 03:57:53 crc kubenswrapper[4772]: I0124 03:57:53.454134 4772 generic.go:334] "Generic (PLEG): container finished" podID="39506910-41ee-49b0-99f6-f11c28930385" containerID="6dbe08d1fdf95f619b6d5eb3bbdf49ca812b94f9fbf4cd383235bdd69c47d8f0" exitCode=0
Jan 24 03:57:53 crc kubenswrapper[4772]: I0124 03:57:53.454379 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n" event={"ID":"39506910-41ee-49b0-99f6-f11c28930385","Type":"ContainerDied","Data":"6dbe08d1fdf95f619b6d5eb3bbdf49ca812b94f9fbf4cd383235bdd69c47d8f0"}
Jan 24 03:57:54 crc kubenswrapper[4772]: I0124 03:57:54.464969 4772 generic.go:334] "Generic (PLEG): container finished" podID="39506910-41ee-49b0-99f6-f11c28930385" containerID="2d2f210905fe34861e38aedb8735c3f3cdbae413d570911efd29331064b21ca7" exitCode=0
Jan 24 03:57:54 crc kubenswrapper[4772]: I0124 03:57:54.465054 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n" event={"ID":"39506910-41ee-49b0-99f6-f11c28930385","Type":"ContainerDied","Data":"2d2f210905fe34861e38aedb8735c3f3cdbae413d570911efd29331064b21ca7"}
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.474233 4772 generic.go:334] "Generic (PLEG): container finished" podID="05936f4c-b4df-4470-bff4-4ea5fee045ad" containerID="e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb" exitCode=0
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.474322 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"05936f4c-b4df-4470-bff4-4ea5fee045ad","Type":"ContainerDied","Data":"e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb"}
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.806369 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.935500 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5fnp\" (UniqueName: \"kubernetes.io/projected/39506910-41ee-49b0-99f6-f11c28930385-kube-api-access-j5fnp\") pod \"39506910-41ee-49b0-99f6-f11c28930385\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") "
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.935608 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-util\") pod \"39506910-41ee-49b0-99f6-f11c28930385\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") "
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.935649 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-bundle\") pod \"39506910-41ee-49b0-99f6-f11c28930385\" (UID: \"39506910-41ee-49b0-99f6-f11c28930385\") "
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.937882 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-bundle" (OuterVolumeSpecName: "bundle") pod "39506910-41ee-49b0-99f6-f11c28930385" (UID: "39506910-41ee-49b0-99f6-f11c28930385"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.941134 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39506910-41ee-49b0-99f6-f11c28930385-kube-api-access-j5fnp" (OuterVolumeSpecName: "kube-api-access-j5fnp") pod "39506910-41ee-49b0-99f6-f11c28930385" (UID: "39506910-41ee-49b0-99f6-f11c28930385"). InnerVolumeSpecName "kube-api-access-j5fnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:57:55 crc kubenswrapper[4772]: I0124 03:57:55.950873 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-util" (OuterVolumeSpecName: "util") pod "39506910-41ee-49b0-99f6-f11c28930385" (UID: "39506910-41ee-49b0-99f6-f11c28930385"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.037861 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5fnp\" (UniqueName: \"kubernetes.io/projected/39506910-41ee-49b0-99f6-f11c28930385-kube-api-access-j5fnp\") on node \"crc\" DevicePath \"\""
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.037901 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-util\") on node \"crc\" DevicePath \"\""
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.037911 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39506910-41ee-49b0-99f6-f11c28930385-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.484232 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n" event={"ID":"39506910-41ee-49b0-99f6-f11c28930385","Type":"ContainerDied","Data":"fd348ad694e216e780a34feccb3bab22dcc86b4525f6ec42ceda1031699a4635"}
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.484322 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd348ad694e216e780a34feccb3bab22dcc86b4525f6ec42ceda1031699a4635"
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.484429 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.496070 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"05936f4c-b4df-4470-bff4-4ea5fee045ad","Type":"ContainerStarted","Data":"4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2"}
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.496494 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/rabbitmq-server-0"
Jan 24 03:57:56 crc kubenswrapper[4772]: I0124 03:57:56.518137 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.30665714 podStartE2EDuration="43.518122952s" podCreationTimestamp="2026-01-24 03:57:13 +0000 UTC" firstStartedPulling="2026-01-24 03:57:15.091869803 +0000 UTC m=+932.128960528" lastFinishedPulling="2026-01-24 03:57:22.303335625 +0000 UTC m=+939.340426340" observedRunningTime="2026-01-24 03:57:56.515899729 +0000 UTC m=+973.552990494" watchObservedRunningTime="2026-01-24 03:57:56.518122952 +0000 UTC m=+973.555213677"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.131650 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"]
Jan 24 03:58:05 crc kubenswrapper[4772]: E0124 03:58:05.132488 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39506910-41ee-49b0-99f6-f11c28930385" containerName="util"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.132506 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="39506910-41ee-49b0-99f6-f11c28930385" containerName="util"
Jan 24 03:58:05 crc kubenswrapper[4772]: E0124 03:58:05.132524 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39506910-41ee-49b0-99f6-f11c28930385" containerName="extract"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.132532 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="39506910-41ee-49b0-99f6-f11c28930385" containerName="extract"
Jan 24 03:58:05 crc kubenswrapper[4772]: E0124 03:58:05.132588 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39506910-41ee-49b0-99f6-f11c28930385" containerName="pull"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.132596 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="39506910-41ee-49b0-99f6-f11c28930385" containerName="pull"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.132768 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="39506910-41ee-49b0-99f6-f11c28930385" containerName="extract"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.133348 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.139767 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jb7ht"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.139982 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.143046 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"]
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.172556 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-webhook-cert\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.172614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-apiservice-cert\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.172669 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6n5\" (UniqueName: \"kubernetes.io/projected/1782b672-2a10-4ea2-a88a-c586d992d130-kube-api-access-pl6n5\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.273602 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-webhook-cert\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.273673 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-apiservice-cert\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.273731 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl6n5\" (UniqueName: \"kubernetes.io/projected/1782b672-2a10-4ea2-a88a-c586d992d130-kube-api-access-pl6n5\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.284724 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-webhook-cert\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.295280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl6n5\" (UniqueName: \"kubernetes.io/projected/1782b672-2a10-4ea2-a88a-c586d992d130-kube-api-access-pl6n5\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.309355 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-apiservice-cert\") pod \"keystone-operator-controller-manager-76c887549b-6rhr6\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") " pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.456767 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:05 crc kubenswrapper[4772]: I0124 03:58:05.863470 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"]
Jan 24 03:58:06 crc kubenswrapper[4772]: I0124 03:58:06.558849 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6" event={"ID":"1782b672-2a10-4ea2-a88a-c586d992d130","Type":"ContainerStarted","Data":"1bd5283e108b01b45883d61f222e2580c0bce66eb4db23506ab4fe2e9d9d93b9"}
Jan 24 03:58:09 crc kubenswrapper[4772]: I0124 03:58:09.585852 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6" event={"ID":"1782b672-2a10-4ea2-a88a-c586d992d130","Type":"ContainerStarted","Data":"a5dcdaa4c3f700c48d725ed851adb9ff87b3b0a4d26ac5c47143f8c3d1d6eeb4"}
Jan 24 03:58:09 crc kubenswrapper[4772]: I0124 03:58:09.586867 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:09 crc kubenswrapper[4772]: I0124 03:58:09.613836 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6" podStartSLOduration=1.30834382 podStartE2EDuration="4.613816817s" podCreationTimestamp="2026-01-24 03:58:05 +0000 UTC" firstStartedPulling="2026-01-24 03:58:05.869418472 +0000 UTC m=+982.906509197" lastFinishedPulling="2026-01-24 03:58:09.174891469 +0000 UTC m=+986.211982194" observedRunningTime="2026-01-24 03:58:09.612829929 +0000 UTC m=+986.649920654" watchObservedRunningTime="2026-01-24 03:58:09.613816817 +0000 UTC m=+986.650907542"
Jan 24 03:58:14 crc kubenswrapper[4772]: I0124 03:58:14.802017 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/rabbitmq-server-0"
Jan 24 03:58:15 crc kubenswrapper[4772]: I0124 03:58:15.463087 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 03:58:16 crc kubenswrapper[4772]: I0124 03:58:16.900419 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 03:58:16 crc kubenswrapper[4772]: I0124 03:58:16.900854 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.709224 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-db-create-k6rrk"]
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.710231 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.724635 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"]
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.725978 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.728395 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-db-secret"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.733873 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-k6rrk"]
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.764955 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"]
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.799460 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b913a37a-112f-484d-b135-9bed042886f9-operator-scripts\") pod \"keystone-db-create-k6rrk\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") " pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.799540 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5x4c\" (UniqueName: \"kubernetes.io/projected/ec53d439-5537-4600-a36e-175873ed7f38-kube-api-access-s5x4c\") pod \"keystone-253c-account-create-update-ljrsq\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") " pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.800065 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec53d439-5537-4600-a36e-175873ed7f38-operator-scripts\") pod \"keystone-253c-account-create-update-ljrsq\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") " pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.800207 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dqr7\" (UniqueName: \"kubernetes.io/projected/b913a37a-112f-484d-b135-9bed042886f9-kube-api-access-6dqr7\") pod \"keystone-db-create-k6rrk\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") " pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.901578 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5x4c\" (UniqueName: \"kubernetes.io/projected/ec53d439-5537-4600-a36e-175873ed7f38-kube-api-access-s5x4c\") pod \"keystone-253c-account-create-update-ljrsq\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") " pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.901754 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec53d439-5537-4600-a36e-175873ed7f38-operator-scripts\") pod \"keystone-253c-account-create-update-ljrsq\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") " pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.901795 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dqr7\" (UniqueName: \"kubernetes.io/projected/b913a37a-112f-484d-b135-9bed042886f9-kube-api-access-6dqr7\") pod \"keystone-db-create-k6rrk\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") " pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.901846 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b913a37a-112f-484d-b135-9bed042886f9-operator-scripts\") pod \"keystone-db-create-k6rrk\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") " pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.902947 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b913a37a-112f-484d-b135-9bed042886f9-operator-scripts\") pod \"keystone-db-create-k6rrk\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") " pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.903067 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec53d439-5537-4600-a36e-175873ed7f38-operator-scripts\") pod \"keystone-253c-account-create-update-ljrsq\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") " pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.923771 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dqr7\" (UniqueName: \"kubernetes.io/projected/b913a37a-112f-484d-b135-9bed042886f9-kube-api-access-6dqr7\") pod \"keystone-db-create-k6rrk\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") " pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:19 crc kubenswrapper[4772]: I0124 03:58:19.935355 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5x4c\" (UniqueName: \"kubernetes.io/projected/ec53d439-5537-4600-a36e-175873ed7f38-kube-api-access-s5x4c\") pod \"keystone-253c-account-create-update-ljrsq\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") " pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.033045 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.066464 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.392885 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"]
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.473263 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-k6rrk"]
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.670192 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq" event={"ID":"ec53d439-5537-4600-a36e-175873ed7f38","Type":"ContainerStarted","Data":"92966d56542571126fbb411be9dbef7ba1e9ecef51dd799126f265e9b9d7d747"}
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.670265 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq" event={"ID":"ec53d439-5537-4600-a36e-175873ed7f38","Type":"ContainerStarted","Data":"02e7e93ac598bfc75ddee4f37a4aada9ae76e47e3f61e4f7ef098a82f6c10ff8"}
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.672133 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-k6rrk" event={"ID":"b913a37a-112f-484d-b135-9bed042886f9","Type":"ContainerStarted","Data":"c1db88fd6d07d58f2995e381d5bc0e65190652ab95d820eb6b828d4536a87c47"}
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.672158 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-k6rrk" event={"ID":"b913a37a-112f-484d-b135-9bed042886f9","Type":"ContainerStarted","Data":"a30b962b826cc820c422602d4611aee175a73fb430fec8c0497cd43828deb5be"}
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.685876 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq" podStartSLOduration=1.685854692 podStartE2EDuration="1.685854692s" podCreationTimestamp="2026-01-24 03:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:58:20.685365408 +0000 UTC m=+997.722456133" watchObservedRunningTime="2026-01-24 03:58:20.685854692 +0000 UTC m=+997.722945427"
Jan 24 03:58:20 crc kubenswrapper[4772]: I0124 03:58:20.705077 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-db-create-k6rrk" podStartSLOduration=1.705056103 podStartE2EDuration="1.705056103s" podCreationTimestamp="2026-01-24 03:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:58:20.699988897 +0000 UTC m=+997.737079622" watchObservedRunningTime="2026-01-24 03:58:20.705056103 +0000 UTC m=+997.742146828"
Jan 24 03:58:21 crc kubenswrapper[4772]: I0124 03:58:21.680774 4772 generic.go:334] "Generic (PLEG): container finished" podID="ec53d439-5537-4600-a36e-175873ed7f38" containerID="92966d56542571126fbb411be9dbef7ba1e9ecef51dd799126f265e9b9d7d747" exitCode=0
Jan 24 03:58:21 crc kubenswrapper[4772]: I0124 03:58:21.680881 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq" event={"ID":"ec53d439-5537-4600-a36e-175873ed7f38","Type":"ContainerDied","Data":"92966d56542571126fbb411be9dbef7ba1e9ecef51dd799126f265e9b9d7d747"}
Jan 24 03:58:21 crc kubenswrapper[4772]: I0124 03:58:21.685461 4772 generic.go:334] "Generic (PLEG): container finished" podID="b913a37a-112f-484d-b135-9bed042886f9" containerID="c1db88fd6d07d58f2995e381d5bc0e65190652ab95d820eb6b828d4536a87c47" exitCode=0
Jan 24 03:58:21 crc kubenswrapper[4772]: I0124 03:58:21.685497 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-k6rrk" event={"ID":"b913a37a-112f-484d-b135-9bed042886f9","Type":"ContainerDied","Data":"c1db88fd6d07d58f2995e381d5bc0e65190652ab95d820eb6b828d4536a87c47"}
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.578401 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-j6cwp"]
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.579451 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.581524 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-xzcq7"
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.595976 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-j6cwp"]
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.649526 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67pt\" (UniqueName: \"kubernetes.io/projected/771cb13d-6b19-45a2-b23d-68156056b344-kube-api-access-w67pt\") pod \"horizon-operator-index-j6cwp\" (UID: \"771cb13d-6b19-45a2-b23d-68156056b344\") " pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.750778 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w67pt\" (UniqueName: \"kubernetes.io/projected/771cb13d-6b19-45a2-b23d-68156056b344-kube-api-access-w67pt\") pod \"horizon-operator-index-j6cwp\" (UID: \"771cb13d-6b19-45a2-b23d-68156056b344\") " pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.777174 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w67pt\" (UniqueName: \"kubernetes.io/projected/771cb13d-6b19-45a2-b23d-68156056b344-kube-api-access-w67pt\") pod \"horizon-operator-index-j6cwp\" (UID: \"771cb13d-6b19-45a2-b23d-68156056b344\") " pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:22 crc kubenswrapper[4772]: I0124 03:58:22.954386 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.057862 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.066131 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.154115 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b913a37a-112f-484d-b135-9bed042886f9-operator-scripts\") pod \"b913a37a-112f-484d-b135-9bed042886f9\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") "
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.154168 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec53d439-5537-4600-a36e-175873ed7f38-operator-scripts\") pod \"ec53d439-5537-4600-a36e-175873ed7f38\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") "
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.154190 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dqr7\" (UniqueName: \"kubernetes.io/projected/b913a37a-112f-484d-b135-9bed042886f9-kube-api-access-6dqr7\") pod \"b913a37a-112f-484d-b135-9bed042886f9\" (UID: \"b913a37a-112f-484d-b135-9bed042886f9\") "
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.154261 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5x4c\" (UniqueName: \"kubernetes.io/projected/ec53d439-5537-4600-a36e-175873ed7f38-kube-api-access-s5x4c\") pod \"ec53d439-5537-4600-a36e-175873ed7f38\" (UID: \"ec53d439-5537-4600-a36e-175873ed7f38\") "
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.155205 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b913a37a-112f-484d-b135-9bed042886f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b913a37a-112f-484d-b135-9bed042886f9" (UID: "b913a37a-112f-484d-b135-9bed042886f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.155343 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec53d439-5537-4600-a36e-175873ed7f38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec53d439-5537-4600-a36e-175873ed7f38" (UID: "ec53d439-5537-4600-a36e-175873ed7f38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.158637 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec53d439-5537-4600-a36e-175873ed7f38-kube-api-access-s5x4c" (OuterVolumeSpecName: "kube-api-access-s5x4c") pod "ec53d439-5537-4600-a36e-175873ed7f38" (UID: "ec53d439-5537-4600-a36e-175873ed7f38"). InnerVolumeSpecName "kube-api-access-s5x4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.161924 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b913a37a-112f-484d-b135-9bed042886f9-kube-api-access-6dqr7" (OuterVolumeSpecName: "kube-api-access-6dqr7") pod "b913a37a-112f-484d-b135-9bed042886f9" (UID: "b913a37a-112f-484d-b135-9bed042886f9"). InnerVolumeSpecName "kube-api-access-6dqr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.255453 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5x4c\" (UniqueName: \"kubernetes.io/projected/ec53d439-5537-4600-a36e-175873ed7f38-kube-api-access-s5x4c\") on node \"crc\" DevicePath \"\""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.255494 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b913a37a-112f-484d-b135-9bed042886f9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.255507 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec53d439-5537-4600-a36e-175873ed7f38-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.255519 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dqr7\" (UniqueName: \"kubernetes.io/projected/b913a37a-112f-484d-b135-9bed042886f9-kube-api-access-6dqr7\") on node \"crc\" DevicePath \"\""
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.418392 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-j6cwp"]
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.704652 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-k6rrk"
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.704794 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-k6rrk" event={"ID":"b913a37a-112f-484d-b135-9bed042886f9","Type":"ContainerDied","Data":"a30b962b826cc820c422602d4611aee175a73fb430fec8c0497cd43828deb5be"}
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.704867 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a30b962b826cc820c422602d4611aee175a73fb430fec8c0497cd43828deb5be"
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.707781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq" event={"ID":"ec53d439-5537-4600-a36e-175873ed7f38","Type":"ContainerDied","Data":"02e7e93ac598bfc75ddee4f37a4aada9ae76e47e3f61e4f7ef098a82f6c10ff8"}
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.707816 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e7e93ac598bfc75ddee4f37a4aada9ae76e47e3f61e4f7ef098a82f6c10ff8"
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.707999 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"
Jan 24 03:58:23 crc kubenswrapper[4772]: I0124 03:58:23.709422 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-j6cwp" event={"ID":"771cb13d-6b19-45a2-b23d-68156056b344","Type":"ContainerStarted","Data":"c38e7867497b2b6141995a3ff7c555e4c9ea8f36001806b4a59fdaef8e327f1c"}
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.299991 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-dptfs"]
Jan 24 03:58:25 crc kubenswrapper[4772]: E0124 03:58:25.302407 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b913a37a-112f-484d-b135-9bed042886f9" containerName="mariadb-database-create"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.302902 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b913a37a-112f-484d-b135-9bed042886f9" containerName="mariadb-database-create"
Jan 24 03:58:25 crc kubenswrapper[4772]: E0124 03:58:25.303000 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec53d439-5537-4600-a36e-175873ed7f38" containerName="mariadb-account-create-update"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.303075 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec53d439-5537-4600-a36e-175873ed7f38" containerName="mariadb-account-create-update"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.303342 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec53d439-5537-4600-a36e-175873ed7f38" containerName="mariadb-account-create-update"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.303443 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b913a37a-112f-484d-b135-9bed042886f9" containerName="mariadb-database-create"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.304215 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.307984 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-scripts"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.308257 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-keystone-dockercfg-2hrts"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.308391 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.309259 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-config-data"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.311653 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-dptfs"]
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.393933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47dph\" (UniqueName: \"kubernetes.io/projected/560ff308-c57e-4dcf-9398-9cc95c8da04a-kube-api-access-47dph\") pod \"keystone-db-sync-dptfs\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") " pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.393990 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560ff308-c57e-4dcf-9398-9cc95c8da04a-config-data\") pod \"keystone-db-sync-dptfs\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") " pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.495502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560ff308-c57e-4dcf-9398-9cc95c8da04a-config-data\") pod \"keystone-db-sync-dptfs\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") " pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.495655 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47dph\" (UniqueName: \"kubernetes.io/projected/560ff308-c57e-4dcf-9398-9cc95c8da04a-kube-api-access-47dph\") pod \"keystone-db-sync-dptfs\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") " pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.511125 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560ff308-c57e-4dcf-9398-9cc95c8da04a-config-data\") pod \"keystone-db-sync-dptfs\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") " pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.518272 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47dph\" (UniqueName: \"kubernetes.io/projected/560ff308-c57e-4dcf-9398-9cc95c8da04a-kube-api-access-47dph\") pod \"keystone-db-sync-dptfs\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") " pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:25 crc kubenswrapper[4772]: I0124 03:58:25.621214 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:26 crc kubenswrapper[4772]: I0124 03:58:26.137822 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-dptfs"]
Jan 24 03:58:26 crc kubenswrapper[4772]: W0124 03:58:26.146311 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod560ff308_c57e_4dcf_9398_9cc95c8da04a.slice/crio-124a99989b6b9e4a5f4539d86fe4ee8c7e2751d734692f806e00fd45c0463daf WatchSource:0}: Error finding container 124a99989b6b9e4a5f4539d86fe4ee8c7e2751d734692f806e00fd45c0463daf: Status 404 returned error can't find the container with id 124a99989b6b9e4a5f4539d86fe4ee8c7e2751d734692f806e00fd45c0463daf
Jan 24 03:58:26 crc kubenswrapper[4772]: I0124 03:58:26.148115 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 24 03:58:26 crc kubenswrapper[4772]: I0124 03:58:26.747052 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-j6cwp" event={"ID":"771cb13d-6b19-45a2-b23d-68156056b344","Type":"ContainerStarted","Data":"27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc"}
Jan 24 03:58:26 crc kubenswrapper[4772]: I0124 03:58:26.748849 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-dptfs" event={"ID":"560ff308-c57e-4dcf-9398-9cc95c8da04a","Type":"ContainerStarted","Data":"124a99989b6b9e4a5f4539d86fe4ee8c7e2751d734692f806e00fd45c0463daf"}
Jan 24 03:58:26 crc kubenswrapper[4772]: I0124 03:58:26.765587 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-j6cwp" podStartSLOduration=2.479293633 podStartE2EDuration="4.765559625s" podCreationTimestamp="2026-01-24 03:58:22 +0000 UTC" firstStartedPulling="2026-01-24 03:58:23.43120317 +0000 UTC m=+1000.468293935" lastFinishedPulling="2026-01-24 03:58:25.717469202 +0000 UTC m=+1002.754559927" observedRunningTime="2026-01-24 03:58:26.765484353 +0000 UTC m=+1003.802575098" watchObservedRunningTime="2026-01-24 03:58:26.765559625 +0000 UTC m=+1003.802650380"
Jan 24 03:58:32 crc kubenswrapper[4772]: I0124 03:58:32.954865 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:32 crc kubenswrapper[4772]: I0124 03:58:32.955423 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:33 crc kubenswrapper[4772]: I0124 03:58:32.999885 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:33 crc kubenswrapper[4772]: I0124 03:58:33.796891 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-dptfs" event={"ID":"560ff308-c57e-4dcf-9398-9cc95c8da04a","Type":"ContainerStarted","Data":"8cf8b01fb0d8f029213c3444df095a25f2cf2f968c4d427c76d548c3be3d4564"}
Jan 24 03:58:33 crc kubenswrapper[4772]: I0124 03:58:33.827188 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-db-sync-dptfs" podStartSLOduration=1.69521871 podStartE2EDuration="8.827158274s" podCreationTimestamp="2026-01-24 03:58:25 +0000 UTC" firstStartedPulling="2026-01-24 03:58:26.147835415 +0000 UTC m=+1003.184926150" lastFinishedPulling="2026-01-24 03:58:33.279774989 +0000 UTC m=+1010.316865714" observedRunningTime="2026-01-24 03:58:33.818345242 +0000 UTC m=+1010.855436007" watchObservedRunningTime="2026-01-24 03:58:33.827158274 +0000 UTC m=+1010.864249039"
Jan 24 03:58:33 crc kubenswrapper[4772]: I0124 03:58:33.833085 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 03:58:35 crc kubenswrapper[4772]: I0124 03:58:35.811257 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"]
Jan 24 03:58:35 crc kubenswrapper[4772]: I0124 03:58:35.813068 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:35 crc kubenswrapper[4772]: I0124 03:58:35.815436 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wzk78"
Jan 24 03:58:35 crc kubenswrapper[4772]: I0124 03:58:35.820517 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"]
Jan 24 03:58:35 crc kubenswrapper[4772]: I0124 03:58:35.926715 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-util\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:35 crc kubenswrapper[4772]: I0124 03:58:35.926824 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-bundle\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:35 crc kubenswrapper[4772]: I0124 03:58:35.927085 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbq68\" (UniqueName: \"kubernetes.io/projected/f72e40f3-c149-4466-91fe-d21a07c221d8-kube-api-access-xbq68\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.028864 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-util\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.029136 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-bundle\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.029194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbq68\" (UniqueName: \"kubernetes.io/projected/f72e40f3-c149-4466-91fe-d21a07c221d8-kube-api-access-xbq68\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.029658 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-util\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.029672 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-bundle\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.049641 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbq68\" (UniqueName: \"kubernetes.io/projected/f72e40f3-c149-4466-91fe-d21a07c221d8-kube-api-access-xbq68\") pod \"080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.129822 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.714826 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"]
Jan 24 03:58:36 crc kubenswrapper[4772]: I0124 03:58:36.818916 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8" event={"ID":"f72e40f3-c149-4466-91fe-d21a07c221d8","Type":"ContainerStarted","Data":"3da8bbb084e09f98da24d174bd3094964c48b9b0ef092c5328489cdc298f350b"}
Jan 24 03:58:37 crc kubenswrapper[4772]: I0124 03:58:37.839620 4772 generic.go:334] "Generic (PLEG): container finished" podID="560ff308-c57e-4dcf-9398-9cc95c8da04a" containerID="8cf8b01fb0d8f029213c3444df095a25f2cf2f968c4d427c76d548c3be3d4564" exitCode=0
Jan 24 03:58:37 crc kubenswrapper[4772]: I0124 03:58:37.839727 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-dptfs" event={"ID":"560ff308-c57e-4dcf-9398-9cc95c8da04a","Type":"ContainerDied","Data":"8cf8b01fb0d8f029213c3444df095a25f2cf2f968c4d427c76d548c3be3d4564"}
Jan 24 03:58:37 crc kubenswrapper[4772]: I0124 03:58:37.843829 4772 generic.go:334] "Generic (PLEG): container finished" podID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerID="1a377a7b1447aca3f61c5d5633f5115ce7307736fbbefe32e06c778e6a5ad521" exitCode=0
Jan 24 03:58:37 crc kubenswrapper[4772]: I0124 03:58:37.843862 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8" event={"ID":"f72e40f3-c149-4466-91fe-d21a07c221d8","Type":"ContainerDied","Data":"1a377a7b1447aca3f61c5d5633f5115ce7307736fbbefe32e06c778e6a5ad521"}
Jan 24 03:58:38 crc kubenswrapper[4772]: I0124 03:58:38.855497 4772 generic.go:334] "Generic (PLEG): container finished" podID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerID="807e5df6c31404ad5bba5bd01a9d401b7eac60a61b4b5b2a0043502a1f7a64ba" exitCode=0
Jan 24 03:58:38 crc kubenswrapper[4772]: I0124 03:58:38.855661 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8" event={"ID":"f72e40f3-c149-4466-91fe-d21a07c221d8","Type":"ContainerDied","Data":"807e5df6c31404ad5bba5bd01a9d401b7eac60a61b4b5b2a0043502a1f7a64ba"}
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.315948 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.481923 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560ff308-c57e-4dcf-9398-9cc95c8da04a-config-data\") pod \"560ff308-c57e-4dcf-9398-9cc95c8da04a\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") "
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.482091 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47dph\" (UniqueName: \"kubernetes.io/projected/560ff308-c57e-4dcf-9398-9cc95c8da04a-kube-api-access-47dph\") pod \"560ff308-c57e-4dcf-9398-9cc95c8da04a\" (UID: \"560ff308-c57e-4dcf-9398-9cc95c8da04a\") "
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.487883 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560ff308-c57e-4dcf-9398-9cc95c8da04a-kube-api-access-47dph" (OuterVolumeSpecName: "kube-api-access-47dph") pod "560ff308-c57e-4dcf-9398-9cc95c8da04a" (UID: "560ff308-c57e-4dcf-9398-9cc95c8da04a"). InnerVolumeSpecName "kube-api-access-47dph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.523161 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560ff308-c57e-4dcf-9398-9cc95c8da04a-config-data" (OuterVolumeSpecName: "config-data") pod "560ff308-c57e-4dcf-9398-9cc95c8da04a" (UID: "560ff308-c57e-4dcf-9398-9cc95c8da04a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.583515 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47dph\" (UniqueName: \"kubernetes.io/projected/560ff308-c57e-4dcf-9398-9cc95c8da04a-kube-api-access-47dph\") on node \"crc\" DevicePath \"\""
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.583555 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560ff308-c57e-4dcf-9398-9cc95c8da04a-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.865766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-dptfs" event={"ID":"560ff308-c57e-4dcf-9398-9cc95c8da04a","Type":"ContainerDied","Data":"124a99989b6b9e4a5f4539d86fe4ee8c7e2751d734692f806e00fd45c0463daf"}
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.865816 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124a99989b6b9e4a5f4539d86fe4ee8c7e2751d734692f806e00fd45c0463daf"
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.866537 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-dptfs"
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.870158 4772 generic.go:334] "Generic (PLEG): container finished" podID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerID="51262daab598d8d99d1ed0b86c92fee3a2564ac0bf43c8568e396e92e166596c" exitCode=0
Jan 24 03:58:39 crc kubenswrapper[4772]: I0124 03:58:39.870204 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8" event={"ID":"f72e40f3-c149-4466-91fe-d21a07c221d8","Type":"ContainerDied","Data":"51262daab598d8d99d1ed0b86c92fee3a2564ac0bf43c8568e396e92e166596c"}
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.069705 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-7jv6b"]
Jan 24 03:58:40 crc kubenswrapper[4772]: E0124 03:58:40.070062 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560ff308-c57e-4dcf-9398-9cc95c8da04a" containerName="keystone-db-sync"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.070085 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="560ff308-c57e-4dcf-9398-9cc95c8da04a" containerName="keystone-db-sync"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.070365 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="560ff308-c57e-4dcf-9398-9cc95c8da04a" containerName="keystone-db-sync"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.070989 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.075821 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-scripts"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.075936 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-config-data"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.075986 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"osp-secret"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.076015 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.075939 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-keystone-dockercfg-2hrts"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.086378 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-7jv6b"]
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.190979 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-fernet-keys\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.191032 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzg5b\" (UniqueName: \"kubernetes.io/projected/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-kube-api-access-kzg5b\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.191063 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-credential-keys\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.191152 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-config-data\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.191186 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-scripts\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.292954 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-fernet-keys\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.293271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzg5b\" (UniqueName: \"kubernetes.io/projected/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-kube-api-access-kzg5b\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.293295 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-credential-keys\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.293331 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-config-data\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.293363 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-scripts\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.297119 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-scripts\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b"
Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.297268 4772 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-fernet-keys\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.299264 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-config-data\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.299963 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-credential-keys\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.309458 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzg5b\" (UniqueName: \"kubernetes.io/projected/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-kube-api-access-kzg5b\") pod \"keystone-bootstrap-7jv6b\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.394490 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.829241 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-7jv6b"] Jan 24 03:58:40 crc kubenswrapper[4772]: W0124 03:58:40.839334 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a0008c7_d89a_43f7_9469_64d1c8ce54dd.slice/crio-f0a7c90e78f0697318d217b4df7a80f002a49140a01349370b55ed90c2070aa0 WatchSource:0}: Error finding container f0a7c90e78f0697318d217b4df7a80f002a49140a01349370b55ed90c2070aa0: Status 404 returned error can't find the container with id f0a7c90e78f0697318d217b4df7a80f002a49140a01349370b55ed90c2070aa0 Jan 24 03:58:40 crc kubenswrapper[4772]: I0124 03:58:40.878026 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" event={"ID":"6a0008c7-d89a-43f7-9469-64d1c8ce54dd","Type":"ContainerStarted","Data":"f0a7c90e78f0697318d217b4df7a80f002a49140a01349370b55ed90c2070aa0"} Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.135629 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.205865 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-util\") pod \"f72e40f3-c149-4466-91fe-d21a07c221d8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.205937 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-bundle\") pod \"f72e40f3-c149-4466-91fe-d21a07c221d8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.206035 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbq68\" (UniqueName: \"kubernetes.io/projected/f72e40f3-c149-4466-91fe-d21a07c221d8-kube-api-access-xbq68\") pod \"f72e40f3-c149-4466-91fe-d21a07c221d8\" (UID: \"f72e40f3-c149-4466-91fe-d21a07c221d8\") " Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.208864 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-bundle" (OuterVolumeSpecName: "bundle") pod "f72e40f3-c149-4466-91fe-d21a07c221d8" (UID: "f72e40f3-c149-4466-91fe-d21a07c221d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.217491 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72e40f3-c149-4466-91fe-d21a07c221d8-kube-api-access-xbq68" (OuterVolumeSpecName: "kube-api-access-xbq68") pod "f72e40f3-c149-4466-91fe-d21a07c221d8" (UID: "f72e40f3-c149-4466-91fe-d21a07c221d8"). InnerVolumeSpecName "kube-api-access-xbq68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.234379 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-util" (OuterVolumeSpecName: "util") pod "f72e40f3-c149-4466-91fe-d21a07c221d8" (UID: "f72e40f3-c149-4466-91fe-d21a07c221d8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.308347 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbq68\" (UniqueName: \"kubernetes.io/projected/f72e40f3-c149-4466-91fe-d21a07c221d8-kube-api-access-xbq68\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.308812 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-util\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.308834 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f72e40f3-c149-4466-91fe-d21a07c221d8-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.888486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8" event={"ID":"f72e40f3-c149-4466-91fe-d21a07c221d8","Type":"ContainerDied","Data":"3da8bbb084e09f98da24d174bd3094964c48b9b0ef092c5328489cdc298f350b"} Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.888528 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da8bbb084e09f98da24d174bd3094964c48b9b0ef092c5328489cdc298f350b" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.888591 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8" Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.895115 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" event={"ID":"6a0008c7-d89a-43f7-9469-64d1c8ce54dd","Type":"ContainerStarted","Data":"30363292f5d3e9746a022fb1f9ce2ad8fd9a21054d682a442db9836887a862d9"} Jan 24 03:58:41 crc kubenswrapper[4772]: I0124 03:58:41.931793 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" podStartSLOduration=1.931778021 podStartE2EDuration="1.931778021s" podCreationTimestamp="2026-01-24 03:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:58:41.927760286 +0000 UTC m=+1018.964851011" watchObservedRunningTime="2026-01-24 03:58:41.931778021 +0000 UTC m=+1018.968868746" Jan 24 03:58:43 crc kubenswrapper[4772]: I0124 03:58:43.911157 4772 generic.go:334] "Generic (PLEG): container finished" podID="6a0008c7-d89a-43f7-9469-64d1c8ce54dd" containerID="30363292f5d3e9746a022fb1f9ce2ad8fd9a21054d682a442db9836887a862d9" exitCode=0 Jan 24 03:58:43 crc kubenswrapper[4772]: I0124 03:58:43.911404 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" event={"ID":"6a0008c7-d89a-43f7-9469-64d1c8ce54dd","Type":"ContainerDied","Data":"30363292f5d3e9746a022fb1f9ce2ad8fd9a21054d682a442db9836887a862d9"} Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.221194 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.370526 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzg5b\" (UniqueName: \"kubernetes.io/projected/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-kube-api-access-kzg5b\") pod \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.370703 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-config-data\") pod \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.370795 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-scripts\") pod \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.370955 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-fernet-keys\") pod \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.371040 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-credential-keys\") pod \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\" (UID: \"6a0008c7-d89a-43f7-9469-64d1c8ce54dd\") " Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.377414 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6a0008c7-d89a-43f7-9469-64d1c8ce54dd" (UID: "6a0008c7-d89a-43f7-9469-64d1c8ce54dd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.377695 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-kube-api-access-kzg5b" (OuterVolumeSpecName: "kube-api-access-kzg5b") pod "6a0008c7-d89a-43f7-9469-64d1c8ce54dd" (UID: "6a0008c7-d89a-43f7-9469-64d1c8ce54dd"). InnerVolumeSpecName "kube-api-access-kzg5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.378073 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-scripts" (OuterVolumeSpecName: "scripts") pod "6a0008c7-d89a-43f7-9469-64d1c8ce54dd" (UID: "6a0008c7-d89a-43f7-9469-64d1c8ce54dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.378263 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6a0008c7-d89a-43f7-9469-64d1c8ce54dd" (UID: "6a0008c7-d89a-43f7-9469-64d1c8ce54dd"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.396558 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-config-data" (OuterVolumeSpecName: "config-data") pod "6a0008c7-d89a-43f7-9469-64d1c8ce54dd" (UID: "6a0008c7-d89a-43f7-9469-64d1c8ce54dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.472652 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.472693 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.472703 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.472711 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.472722 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzg5b\" (UniqueName: \"kubernetes.io/projected/6a0008c7-d89a-43f7-9469-64d1c8ce54dd-kube-api-access-kzg5b\") on node \"crc\" DevicePath \"\"" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.925004 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" event={"ID":"6a0008c7-d89a-43f7-9469-64d1c8ce54dd","Type":"ContainerDied","Data":"f0a7c90e78f0697318d217b4df7a80f002a49140a01349370b55ed90c2070aa0"} Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.925043 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0a7c90e78f0697318d217b4df7a80f002a49140a01349370b55ed90c2070aa0" Jan 24 03:58:45 crc kubenswrapper[4772]: I0124 03:58:45.925056 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-7jv6b" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.332545 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-578d7f9b5f-s458x"] Jan 24 03:58:46 crc kubenswrapper[4772]: E0124 03:58:46.332894 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0008c7-d89a-43f7-9469-64d1c8ce54dd" containerName="keystone-bootstrap" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.332910 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0008c7-d89a-43f7-9469-64d1c8ce54dd" containerName="keystone-bootstrap" Jan 24 03:58:46 crc kubenswrapper[4772]: E0124 03:58:46.332926 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerName="util" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.332934 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerName="util" Jan 24 03:58:46 crc kubenswrapper[4772]: E0124 03:58:46.332949 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerName="pull" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.332956 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerName="pull" Jan 24 03:58:46 crc kubenswrapper[4772]: E0124 03:58:46.332968 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerName="extract" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.332975 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerName="extract" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.333113 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0008c7-d89a-43f7-9469-64d1c8ce54dd" containerName="keystone-bootstrap" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.333135 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" containerName="extract" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.333693 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.338304 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-config-data" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.338592 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-keystone-dockercfg-2hrts" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.338617 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-scripts" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.340976 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-578d7f9b5f-s458x"] Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.342316 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.486446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-scripts\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.486786 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-fernet-keys\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.486901 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6t4\" (UniqueName: \"kubernetes.io/projected/15e139ca-fa05-4701-b9d4-e4524f011e5d-kube-api-access-zt6t4\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.487145 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-credential-keys\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.487247 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-config-data\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.588917 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-scripts\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.590099 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-fernet-keys\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.590170 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6t4\" (UniqueName: \"kubernetes.io/projected/15e139ca-fa05-4701-b9d4-e4524f011e5d-kube-api-access-zt6t4\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.590300 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-credential-keys\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.590385 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-config-data\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.593140 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-scripts\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.593959 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-config-data\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.594341 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-fernet-keys\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.595113 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-credential-keys\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.606111 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6t4\" (UniqueName: \"kubernetes.io/projected/15e139ca-fa05-4701-b9d4-e4524f011e5d-kube-api-access-zt6t4\") pod \"keystone-578d7f9b5f-s458x\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") " pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.649530 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.899459 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 03:58:46 crc kubenswrapper[4772]: I0124 03:58:46.899893 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 03:58:47 crc kubenswrapper[4772]: I0124 03:58:47.157724 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-578d7f9b5f-s458x"] Jan 24 03:58:47 crc kubenswrapper[4772]: W0124 03:58:47.162542 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e139ca_fa05_4701_b9d4_e4524f011e5d.slice/crio-9fe0f075c62585faad7a4ae1a6303f2076ac9f7edc65da457881668cacf14d5d WatchSource:0}: Error finding container 9fe0f075c62585faad7a4ae1a6303f2076ac9f7edc65da457881668cacf14d5d: Status 404 returned error can't find the container with id 9fe0f075c62585faad7a4ae1a6303f2076ac9f7edc65da457881668cacf14d5d Jan 24 03:58:47 crc kubenswrapper[4772]: I0124 03:58:47.938797 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" event={"ID":"15e139ca-fa05-4701-b9d4-e4524f011e5d","Type":"ContainerStarted","Data":"5f1adc023cdb78f59cb5b997da7cd6e791c9ed2b6ee1bbc8e9c823d76db51282"} Jan 24 03:58:47 crc kubenswrapper[4772]: I0124 03:58:47.938860 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" event={"ID":"15e139ca-fa05-4701-b9d4-e4524f011e5d","Type":"ContainerStarted","Data":"9fe0f075c62585faad7a4ae1a6303f2076ac9f7edc65da457881668cacf14d5d"} Jan 24 03:58:47 crc kubenswrapper[4772]: I0124 03:58:47.939004 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" Jan 24 03:58:47 crc kubenswrapper[4772]: I0124 03:58:47.958265 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" podStartSLOduration=1.958240847 podStartE2EDuration="1.958240847s" podCreationTimestamp="2026-01-24 03:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:58:47.954331065 +0000 UTC m=+1024.991421800" watchObservedRunningTime="2026-01-24 03:58:47.958240847 +0000 UTC m=+1024.995331582" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.111897 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"] Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.112852 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.114800 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.115002 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bpvct" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.123227 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"] Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.241062 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-webhook-cert\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.241134 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmccv\" (UniqueName: \"kubernetes.io/projected/fdc11ac5-1127-4ca6-b518-9476edcdaafb-kube-api-access-pmccv\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.241167 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-apiservice-cert\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.342512 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-webhook-cert\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.342567 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmccv\" (UniqueName: \"kubernetes.io/projected/fdc11ac5-1127-4ca6-b518-9476edcdaafb-kube-api-access-pmccv\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.342604 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-apiservice-cert\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.347876 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-apiservice-cert\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.350063 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-webhook-cert\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.360123 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmccv\" (UniqueName: \"kubernetes.io/projected/fdc11ac5-1127-4ca6-b518-9476edcdaafb-kube-api-access-pmccv\") pod \"horizon-operator-controller-manager-847649c645-ww8rt\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") " pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.435307 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.940057 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"] Jan 24 03:58:50 crc kubenswrapper[4772]: I0124 03:58:50.968200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" event={"ID":"fdc11ac5-1127-4ca6-b518-9476edcdaafb","Type":"ContainerStarted","Data":"4b90d71f7d242317822fd76e11424e21035876150b30ffa6091590293dedff98"} Jan 24 03:58:55 crc kubenswrapper[4772]: I0124 03:58:55.016092 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" event={"ID":"fdc11ac5-1127-4ca6-b518-9476edcdaafb","Type":"ContainerStarted","Data":"733f78ae577f592e537720480ceee881b358d0f4e498e222913a413efb9d743d"} Jan 24 03:58:55 crc kubenswrapper[4772]: I0124 03:58:55.016578 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:59:00 crc kubenswrapper[4772]: I0124 03:59:00.442729 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" Jan 24 03:59:00 crc kubenswrapper[4772]: I0124 03:59:00.470239 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" podStartSLOduration=7.165041745 podStartE2EDuration="10.470207734s" podCreationTimestamp="2026-01-24 03:58:50 +0000 UTC" firstStartedPulling="2026-01-24 03:58:50.963525913 +0000 UTC m=+1028.000616638" lastFinishedPulling="2026-01-24 03:58:54.268691902 +0000 UTC m=+1031.305782627" observedRunningTime="2026-01-24 03:58:55.039320424 +0000 UTC m=+1032.076411169" watchObservedRunningTime="2026-01-24 03:59:00.470207734 +0000 UTC m=+1037.507298499" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.554872 4772 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-rcnpp"] Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.556493 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.558048 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-scripts" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.558581 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-config-data" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.559110 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon-horizon-dockercfg-rd9td" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.565427 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.571158 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-rcnpp"] Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.652352 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"] Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.655638 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.668517 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"] Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.677803 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-config-data\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.677844 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-scripts\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.677899 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f912cd99-d531-4af0-8c94-0f10ab3b5503-logs\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.677935 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f912cd99-d531-4af0-8c94-0f10ab3b5503-horizon-secret-key\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.678058 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrw7z\" (UniqueName: \"kubernetes.io/projected/f912cd99-d531-4af0-8c94-0f10ab3b5503-kube-api-access-jrw7z\") pod 
\"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779180 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrw7z\" (UniqueName: \"kubernetes.io/projected/f912cd99-d531-4af0-8c94-0f10ab3b5503-kube-api-access-jrw7z\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779238 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-config-data\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779260 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-scripts\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779286 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7h22\" (UniqueName: \"kubernetes.io/projected/f87e40bd-c14e-48e3-aa49-9d2d79064781-kube-api-access-q7h22\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779329 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-scripts\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779373 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f912cd99-d531-4af0-8c94-0f10ab3b5503-logs\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779395 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87e40bd-c14e-48e3-aa49-9d2d79064781-logs\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779420 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-config-data\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779509 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f87e40bd-c14e-48e3-aa49-9d2d79064781-horizon-secret-key\") pod 
\"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.779605 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f912cd99-d531-4af0-8c94-0f10ab3b5503-horizon-secret-key\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.780087 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-scripts\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.780233 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f912cd99-d531-4af0-8c94-0f10ab3b5503-logs\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.780445 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-config-data\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.789872 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f912cd99-d531-4af0-8c94-0f10ab3b5503-horizon-secret-key\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.794942 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrw7z\" (UniqueName: \"kubernetes.io/projected/f912cd99-d531-4af0-8c94-0f10ab3b5503-kube-api-access-jrw7z\") pod \"horizon-6675bd755-rcnpp\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") " pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.874497 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.880563 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7h22\" (UniqueName: \"kubernetes.io/projected/f87e40bd-c14e-48e3-aa49-9d2d79064781-kube-api-access-q7h22\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.880995 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-scripts\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.881096 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87e40bd-c14e-48e3-aa49-9d2d79064781-logs\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.881179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-config-data\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.881278 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f87e40bd-c14e-48e3-aa49-9d2d79064781-horizon-secret-key\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.882091 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87e40bd-c14e-48e3-aa49-9d2d79064781-logs\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.882450 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-scripts\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.883152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-config-data\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.884371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f87e40bd-c14e-48e3-aa49-9d2d79064781-horizon-secret-key\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.913484 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7h22\" (UniqueName: \"kubernetes.io/projected/f87e40bd-c14e-48e3-aa49-9d2d79064781-kube-api-access-q7h22\") pod \"horizon-8bb8556c5-xlqnb\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:05 crc kubenswrapper[4772]: I0124 03:59:05.974676 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:06 crc kubenswrapper[4772]: I0124 03:59:06.334156 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-rcnpp"]
Jan 24 03:59:06 crc kubenswrapper[4772]: I0124 03:59:06.394912 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"]
Jan 24 03:59:06 crc kubenswrapper[4772]: W0124 03:59:06.398631 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf87e40bd_c14e_48e3_aa49_9d2d79064781.slice/crio-ba7ef03c3f8bd883af07139d30c0381779594205ba5c6914438a9f6037bc2846 WatchSource:0}: Error finding container ba7ef03c3f8bd883af07139d30c0381779594205ba5c6914438a9f6037bc2846: Status 404 returned error can't find the container with id ba7ef03c3f8bd883af07139d30c0381779594205ba5c6914438a9f6037bc2846
Jan 24 03:59:07 crc kubenswrapper[4772]: I0124 03:59:07.108573 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" event={"ID":"f912cd99-d531-4af0-8c94-0f10ab3b5503","Type":"ContainerStarted","Data":"2f5c67adaa7808d8e3291df4d4cd57f31f3b4d818d65ffebd89174c650f9fa99"}
Jan 24 03:59:07 crc kubenswrapper[4772]: I0124 03:59:07.109999 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" event={"ID":"f87e40bd-c14e-48e3-aa49-9d2d79064781","Type":"ContainerStarted","Data":"ba7ef03c3f8bd883af07139d30c0381779594205ba5c6914438a9f6037bc2846"}
Jan 24 03:59:15 crc kubenswrapper[4772]: I0124 03:59:15.178231 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" event={"ID":"f87e40bd-c14e-48e3-aa49-9d2d79064781","Type":"ContainerStarted","Data":"687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed"}
Jan 24 03:59:15 crc kubenswrapper[4772]: I0124 03:59:15.180595 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" event={"ID":"f912cd99-d531-4af0-8c94-0f10ab3b5503","Type":"ContainerStarted","Data":"464ec0e84c99cf817ba53728ad7ae6ad152f54b73c6551a71dcf752f1995274c"}
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.191452 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" event={"ID":"f912cd99-d531-4af0-8c94-0f10ab3b5503","Type":"ContainerStarted","Data":"4f68e258c96c122984c82b3b5bd7048e8f8666f967595c4f0dd7d07c13606b66"}
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.193135 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" event={"ID":"f87e40bd-c14e-48e3-aa49-9d2d79064781","Type":"ContainerStarted","Data":"d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20"}
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.219433 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" podStartSLOduration=2.788788714 podStartE2EDuration="11.219410154s" podCreationTimestamp="2026-01-24 03:59:05 +0000 UTC" firstStartedPulling="2026-01-24 03:59:06.338672179 +0000 UTC m=+1043.375762904" lastFinishedPulling="2026-01-24 03:59:14.769293619 +0000 UTC m=+1051.806384344" observedRunningTime="2026-01-24 03:59:16.215103643 +0000 UTC m=+1053.252194368" watchObservedRunningTime="2026-01-24 03:59:16.219410154 +0000 UTC m=+1053.256500889"
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.254646 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" podStartSLOduration=2.914645134 podStartE2EDuration="11.25462636s" podCreationTimestamp="2026-01-24 03:59:05 +0000 UTC" firstStartedPulling="2026-01-24 03:59:06.400858238 +0000 UTC m=+1043.437948963" lastFinishedPulling="2026-01-24 03:59:14.740839474 +0000 UTC m=+1051.777930189" observedRunningTime="2026-01-24 03:59:16.252820759 +0000 UTC m=+1053.289911494" watchObservedRunningTime="2026-01-24 03:59:16.25462636 +0000 UTC m=+1053.291717095"
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.900938 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.900985 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.901027 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82"
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.901502 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e6f02f79a8cefb2a7d2c04486f279ad1f52ca159fe2ff0308256f0e25cae45d"} pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 03:59:16 crc kubenswrapper[4772]: I0124 03:59:16.901566 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://2e6f02f79a8cefb2a7d2c04486f279ad1f52ca159fe2ff0308256f0e25cae45d" gracePeriod=600
Jan 24 03:59:17 crc kubenswrapper[4772]: I0124 03:59:17.206158 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="2e6f02f79a8cefb2a7d2c04486f279ad1f52ca159fe2ff0308256f0e25cae45d" exitCode=0
Jan 24 03:59:17 crc kubenswrapper[4772]: I0124 03:59:17.206267 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"2e6f02f79a8cefb2a7d2c04486f279ad1f52ca159fe2ff0308256f0e25cae45d"}
Jan 24 03:59:17 crc kubenswrapper[4772]: I0124 03:59:17.206502 4772 scope.go:117] "RemoveContainer" containerID="c4c73373f46ca383dd1219546a633cd8ad9bb24ea298a230636c1f231a1c6003"
Jan 24 03:59:18 crc kubenswrapper[4772]: I0124 03:59:18.215865 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"8bcdebca1206d8fdd16116f29a35ce352cd252b575a61af374af1a09608438c8"}
Jan 24 03:59:18 crc kubenswrapper[4772]: I0124 03:59:18.490300 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x"
Jan 24 03:59:25 crc kubenswrapper[4772]: I0124 03:59:25.875525 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 03:59:25 crc kubenswrapper[4772]: I0124 03:59:25.876542 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 03:59:25 crc kubenswrapper[4772]: I0124 03:59:25.877324 4772 prober.go:107] "Probe failed" probeType="Startup" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.85:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.85:8080: connect: connection refused"
Jan 24 03:59:25 crc kubenswrapper[4772]: I0124 03:59:25.975613 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:25 crc kubenswrapper[4772]: I0124 03:59:25.975701 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:25 crc kubenswrapper[4772]: I0124 03:59:25.977765 4772 prober.go:107] "Probe failed" probeType="Startup" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.86:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.86:8080: connect: connection refused"
Jan 24 03:59:37 crc kubenswrapper[4772]: I0124 03:59:37.676631 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 03:59:37 crc kubenswrapper[4772]: I0124 03:59:37.926769 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:39 crc kubenswrapper[4772]: I0124 03:59:39.711352 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 03:59:39 crc kubenswrapper[4772]: I0124 03:59:39.988697 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 03:59:40 crc kubenswrapper[4772]: I0124 03:59:40.050569 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-rcnpp"]
Jan 24 03:59:40 crc kubenswrapper[4772]: I0124 03:59:40.401467 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon-log" containerID="cri-o://464ec0e84c99cf817ba53728ad7ae6ad152f54b73c6551a71dcf752f1995274c" gracePeriod=30
Jan 24 03:59:40 crc kubenswrapper[4772]: I0124 03:59:40.401542 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" containerID="cri-o://4f68e258c96c122984c82b3b5bd7048e8f8666f967595c4f0dd7d07c13606b66" gracePeriod=30
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.656417 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-8wb65"]
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.658191 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.661207 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-policy"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.671194 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-8wb65"]
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.729668 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-8wb65"]
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.730536 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data horizon-secret-key kube-api-access-wq8vp logs policy scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65" podUID="768406d4-8662-47c7-9250-699dd2c85997"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.745845 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"]
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.746109 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon-log" containerID="cri-o://687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed" gracePeriod=30
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.746231 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" containerID="cri-o://d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20" gracePeriod=30
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.803789 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.803873 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/768406d4-8662-47c7-9250-699dd2c85997-logs\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.803904 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.803938 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.804234 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-policy\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.804697 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq8vp\" (UniqueName: \"kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.905886 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/768406d4-8662-47c7-9250-699dd2c85997-logs\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.906177 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.906278 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.906381 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-policy\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.906504 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq8vp\" (UniqueName: \"kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.906594 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.906288 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-scripts: configmap "horizon-scripts" not found
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.906385 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/768406d4-8662-47c7-9250-699dd2c85997-logs\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.906768 4772 secret.go:188] Couldn't get secret horizon-kuttl-tests/horizon: secret "horizon" not found
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.906352 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-config-data: configmap "horizon-config-data" not found
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.906777 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:43.406715122 +0000 UTC m=+1080.443805917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : configmap "horizon-scripts" not found
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.906988 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:43.406976239 +0000 UTC m=+1080.444066965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "horizon-secret-key" (UniqueName: "kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : secret "horizon" not found
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.907003 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:43.4069953 +0000 UTC m=+1080.444086025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : configmap "horizon-config-data" not found
Jan 24 03:59:42 crc kubenswrapper[4772]: I0124 03:59:42.907018 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-policy\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.910457 4772 projected.go:194] Error preparing data for projected volume kube-api-access-wq8vp for pod horizon-kuttl-tests/horizon-845cfdcdb-8wb65: failed to fetch token: serviceaccounts "horizon-horizon" not found
Jan 24 03:59:42 crc kubenswrapper[4772]: E0124 03:59:42.910688 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:43.410680954 +0000 UTC m=+1080.447771679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wq8vp" (UniqueName: "kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : failed to fetch token: serviceaccounts "horizon-horizon" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.414250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.414328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.414386 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-scripts: configmap "horizon-scripts" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.414428 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq8vp\" (UniqueName: \"kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.414497 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:44.414467832 +0000 UTC m=+1081.451558597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : configmap "horizon-scripts" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.414546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.414493 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-config-data: configmap "horizon-config-data" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.414633 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:44.414611416 +0000 UTC m=+1081.451702221 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : configmap "horizon-config-data" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.414712 4772 secret.go:188] Couldn't get secret horizon-kuttl-tests/horizon: secret "horizon" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.414794 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:44.414772241 +0000 UTC m=+1081.451862966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "horizon-secret-key" (UniqueName: "kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : secret "horizon" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.417771 4772 projected.go:194] Error preparing data for projected volume kube-api-access-wq8vp for pod horizon-kuttl-tests/horizon-845cfdcdb-8wb65: failed to fetch token: serviceaccounts "horizon-horizon" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: E0124 03:59:43.417833 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:44.417820227 +0000 UTC m=+1081.454911052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wq8vp" (UniqueName: "kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : failed to fetch token: serviceaccounts "horizon-horizon" not found
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.422976 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.431883 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.616727 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-policy\") pod \"768406d4-8662-47c7-9250-699dd2c85997\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") "
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.616998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/768406d4-8662-47c7-9250-699dd2c85997-logs\") pod \"768406d4-8662-47c7-9250-699dd2c85997\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") "
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.617076 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-policy" (OuterVolumeSpecName: "policy") pod "768406d4-8662-47c7-9250-699dd2c85997" (UID: "768406d4-8662-47c7-9250-699dd2c85997"). InnerVolumeSpecName "policy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.617292 4772 reconciler_common.go:293] "Volume detached for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-policy\") on node \"crc\" DevicePath \"\""
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.617303 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/768406d4-8662-47c7-9250-699dd2c85997-logs" (OuterVolumeSpecName: "logs") pod "768406d4-8662-47c7-9250-699dd2c85997" (UID: "768406d4-8662-47c7-9250-699dd2c85997"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 03:59:43 crc kubenswrapper[4772]: I0124 03:59:43.718668 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/768406d4-8662-47c7-9250-699dd2c85997-logs\") on node \"crc\" DevicePath \"\""
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.429486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq8vp\" (UniqueName: \"kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.429999 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.430058 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.430145 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data\") pod \"horizon-845cfdcdb-8wb65\" (UID: \"768406d4-8662-47c7-9250-699dd2c85997\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.430165 4772 secret.go:188] Couldn't get secret horizon-kuttl-tests/horizon: secret "horizon" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.430228 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:46.430208632 +0000 UTC m=+1083.467299357 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "horizon-secret-key" (UniqueName: "kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : secret "horizon" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.430244 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-scripts: configmap "horizon-scripts" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.430322 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:46.430297304 +0000 UTC m=+1083.467388089 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : configmap "horizon-scripts" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.430363 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-config-data: configmap "horizon-config-data" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.430461 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:46.430431998 +0000 UTC m=+1083.467522793 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : configmap "horizon-config-data" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.433914 4772 projected.go:194] Error preparing data for projected volume kube-api-access-wq8vp for pod horizon-kuttl-tests/horizon-845cfdcdb-8wb65: failed to fetch token: serviceaccounts "horizon-horizon" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: E0124 03:59:44.433999 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp podName:768406d4-8662-47c7-9250-699dd2c85997 nodeName:}" failed. No retries permitted until 2026-01-24 03:59:46.433976358 +0000 UTC m=+1083.471067163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wq8vp" (UniqueName: "kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp") pod "horizon-845cfdcdb-8wb65" (UID: "768406d4-8662-47c7-9250-699dd2c85997") : failed to fetch token: serviceaccounts "horizon-horizon" not found
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.438118 4772 generic.go:334] "Generic (PLEG): container finished" podID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerID="4f68e258c96c122984c82b3b5bd7048e8f8666f967595c4f0dd7d07c13606b66" exitCode=0
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.438185 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" event={"ID":"f912cd99-d531-4af0-8c94-0f10ab3b5503","Type":"ContainerDied","Data":"4f68e258c96c122984c82b3b5bd7048e8f8666f967595c4f0dd7d07c13606b66"}
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.438262 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-8wb65"
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.501280 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-8wb65"]
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.512291 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-8wb65"]
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.634360 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.634413 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq8vp\" (UniqueName: \"kubernetes.io/projected/768406d4-8662-47c7-9250-699dd2c85997-kube-api-access-wq8vp\") on node \"crc\" DevicePath \"\""
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.634433 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/768406d4-8662-47c7-9250-699dd2c85997-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 03:59:44 crc kubenswrapper[4772]: I0124 03:59:44.634451 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/768406d4-8662-47c7-9250-699dd2c85997-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 24 03:59:45 crc kubenswrapper[4772]: I0124 03:59:45.669863 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768406d4-8662-47c7-9250-699dd2c85997" path="/var/lib/kubelet/pods/768406d4-8662-47c7-9250-699dd2c85997/volumes"
Jan 24 03:59:45 crc kubenswrapper[4772]: I0124 03:59:45.875448 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.85:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.85:8080: connect: connection refused"
Jan 24 03:59:45 crc kubenswrapper[4772]: I0124 03:59:45.976041 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.86:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.86:8080: connect: connection refused"
Jan 24 03:59:46 crc kubenswrapper[4772]: I0124 03:59:46.459965 4772 generic.go:334] "Generic (PLEG): container finished" podID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerID="d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20" exitCode=0
Jan 24 03:59:46 crc kubenswrapper[4772]: I0124 03:59:46.460021 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" event={"ID":"f87e40bd-c14e-48e3-aa49-9d2d79064781","Type":"ContainerDied","Data":"d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20"}
Jan 24 03:59:55 crc kubenswrapper[4772]: I0124 03:59:55.875458 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.85:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.85:8080: connect: connection refused"
Jan 24 03:59:55 crc kubenswrapper[4772]: I0124 03:59:55.975488 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.86:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.86:8080: connect: connection refused"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.186247 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"]
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.188222 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.191645 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.191767 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.206836 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"]
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.295506 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzd75\" (UniqueName: \"kubernetes.io/projected/def5cfbc-d4eb-4cd1-9222-ee936f51b740-kube-api-access-lzd75\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.295667 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def5cfbc-d4eb-4cd1-9222-ee936f51b740-secret-volume\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.295910 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def5cfbc-d4eb-4cd1-9222-ee936f51b740-config-volume\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.397807 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def5cfbc-d4eb-4cd1-9222-ee936f51b740-config-volume\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.397941 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzd75\" (UniqueName: \"kubernetes.io/projected/def5cfbc-d4eb-4cd1-9222-ee936f51b740-kube-api-access-lzd75\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.398049 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def5cfbc-d4eb-4cd1-9222-ee936f51b740-secret-volume\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.398881 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def5cfbc-d4eb-4cd1-9222-ee936f51b740-config-volume\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.408439 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def5cfbc-d4eb-4cd1-9222-ee936f51b740-secret-volume\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.415261 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzd75\" (UniqueName: \"kubernetes.io/projected/def5cfbc-d4eb-4cd1-9222-ee936f51b740-kube-api-access-lzd75\") pod \"collect-profiles-29487120-fk8c2\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.516467 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:00 crc kubenswrapper[4772]: I0124 04:00:00.769016 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"]
Jan 24 04:00:01 crc kubenswrapper[4772]: I0124 04:00:01.599407 4772 generic.go:334] "Generic (PLEG): container finished" podID="def5cfbc-d4eb-4cd1-9222-ee936f51b740" containerID="b1285dafc2b0eb7412ab1ad3fdda3f7d530a0223611ce144424e51c6c1f5f17a" exitCode=0
Jan 24 04:00:01 crc kubenswrapper[4772]: I0124 04:00:01.599464 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2" event={"ID":"def5cfbc-d4eb-4cd1-9222-ee936f51b740","Type":"ContainerDied","Data":"b1285dafc2b0eb7412ab1ad3fdda3f7d530a0223611ce144424e51c6c1f5f17a"}
Jan 24 04:00:01 crc kubenswrapper[4772]: I0124 04:00:01.599681 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2" event={"ID":"def5cfbc-d4eb-4cd1-9222-ee936f51b740","Type":"ContainerStarted","Data":"553625c589a917f191c276783b9c2f33687cae653915d87798cd71c1bf187177"}
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.050089 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.165180 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzd75\" (UniqueName: \"kubernetes.io/projected/def5cfbc-d4eb-4cd1-9222-ee936f51b740-kube-api-access-lzd75\") pod \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") "
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.165220 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def5cfbc-d4eb-4cd1-9222-ee936f51b740-config-volume\") pod \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") "
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.165268 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def5cfbc-d4eb-4cd1-9222-ee936f51b740-secret-volume\") pod \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\" (UID: \"def5cfbc-d4eb-4cd1-9222-ee936f51b740\") "
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.166437 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/def5cfbc-d4eb-4cd1-9222-ee936f51b740-config-volume" (OuterVolumeSpecName: "config-volume") pod "def5cfbc-d4eb-4cd1-9222-ee936f51b740" (UID: "def5cfbc-d4eb-4cd1-9222-ee936f51b740"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.173915 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def5cfbc-d4eb-4cd1-9222-ee936f51b740-kube-api-access-lzd75" (OuterVolumeSpecName: "kube-api-access-lzd75") pod "def5cfbc-d4eb-4cd1-9222-ee936f51b740" (UID: "def5cfbc-d4eb-4cd1-9222-ee936f51b740"). InnerVolumeSpecName "kube-api-access-lzd75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.173997 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def5cfbc-d4eb-4cd1-9222-ee936f51b740-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "def5cfbc-d4eb-4cd1-9222-ee936f51b740" (UID: "def5cfbc-d4eb-4cd1-9222-ee936f51b740"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.266992 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzd75\" (UniqueName: \"kubernetes.io/projected/def5cfbc-d4eb-4cd1-9222-ee936f51b740-kube-api-access-lzd75\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.267402 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def5cfbc-d4eb-4cd1-9222-ee936f51b740-config-volume\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.267417 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def5cfbc-d4eb-4cd1-9222-ee936f51b740-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.619837 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2" event={"ID":"def5cfbc-d4eb-4cd1-9222-ee936f51b740","Type":"ContainerDied","Data":"553625c589a917f191c276783b9c2f33687cae653915d87798cd71c1bf187177"}
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.619889 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553625c589a917f191c276783b9c2f33687cae653915d87798cd71c1bf187177"
Jan 24 04:00:03 crc kubenswrapper[4772]: I0124 04:00:03.619918 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487120-fk8c2"
Jan 24 04:00:05 crc kubenswrapper[4772]: I0124 04:00:05.876471 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.85:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.85:8080: connect: connection refused"
Jan 24 04:00:05 crc kubenswrapper[4772]: I0124 04:00:05.877025 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 04:00:05 crc kubenswrapper[4772]: I0124 04:00:05.975775 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.86:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.86:8080: connect: connection refused"
Jan 24 04:00:05 crc kubenswrapper[4772]: I0124 04:00:05.975957 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.677207 4772 generic.go:334] "Generic (PLEG): container finished" podID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerID="464ec0e84c99cf817ba53728ad7ae6ad152f54b73c6551a71dcf752f1995274c" exitCode=137
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.677359 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" event={"ID":"f912cd99-d531-4af0-8c94-0f10ab3b5503","Type":"ContainerDied","Data":"464ec0e84c99cf817ba53728ad7ae6ad152f54b73c6551a71dcf752f1995274c"}
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.792118 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.943012 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrw7z\" (UniqueName: \"kubernetes.io/projected/f912cd99-d531-4af0-8c94-0f10ab3b5503-kube-api-access-jrw7z\") pod \"f912cd99-d531-4af0-8c94-0f10ab3b5503\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") "
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.943074 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f912cd99-d531-4af0-8c94-0f10ab3b5503-horizon-secret-key\") pod \"f912cd99-d531-4af0-8c94-0f10ab3b5503\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") "
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.943101 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-scripts\") pod \"f912cd99-d531-4af0-8c94-0f10ab3b5503\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") "
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.943182 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f912cd99-d531-4af0-8c94-0f10ab3b5503-logs\") pod \"f912cd99-d531-4af0-8c94-0f10ab3b5503\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") "
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.943208 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-config-data\") pod \"f912cd99-d531-4af0-8c94-0f10ab3b5503\" (UID: \"f912cd99-d531-4af0-8c94-0f10ab3b5503\") "
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.944434 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f912cd99-d531-4af0-8c94-0f10ab3b5503-logs" (OuterVolumeSpecName: "logs") pod "f912cd99-d531-4af0-8c94-0f10ab3b5503" (UID: "f912cd99-d531-4af0-8c94-0f10ab3b5503"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.948717 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f912cd99-d531-4af0-8c94-0f10ab3b5503-kube-api-access-jrw7z" (OuterVolumeSpecName: "kube-api-access-jrw7z") pod "f912cd99-d531-4af0-8c94-0f10ab3b5503" (UID: "f912cd99-d531-4af0-8c94-0f10ab3b5503"). InnerVolumeSpecName "kube-api-access-jrw7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.949523 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f912cd99-d531-4af0-8c94-0f10ab3b5503-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f912cd99-d531-4af0-8c94-0f10ab3b5503" (UID: "f912cd99-d531-4af0-8c94-0f10ab3b5503"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.960460 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-scripts" (OuterVolumeSpecName: "scripts") pod "f912cd99-d531-4af0-8c94-0f10ab3b5503" (UID: "f912cd99-d531-4af0-8c94-0f10ab3b5503"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:00:10 crc kubenswrapper[4772]: I0124 04:00:10.975798 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-config-data" (OuterVolumeSpecName: "config-data") pod "f912cd99-d531-4af0-8c94-0f10ab3b5503" (UID: "f912cd99-d531-4af0-8c94-0f10ab3b5503"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.044354 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrw7z\" (UniqueName: \"kubernetes.io/projected/f912cd99-d531-4af0-8c94-0f10ab3b5503-kube-api-access-jrw7z\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.044385 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f912cd99-d531-4af0-8c94-0f10ab3b5503-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.044394 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.044402 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f912cd99-d531-4af0-8c94-0f10ab3b5503-logs\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.044410 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f912cd99-d531-4af0-8c94-0f10ab3b5503-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.689982 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp" event={"ID":"f912cd99-d531-4af0-8c94-0f10ab3b5503","Type":"ContainerDied","Data":"2f5c67adaa7808d8e3291df4d4cd57f31f3b4d818d65ffebd89174c650f9fa99"}
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.690085 4772 scope.go:117] "RemoveContainer" containerID="4f68e258c96c122984c82b3b5bd7048e8f8666f967595c4f0dd7d07c13606b66"
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.690090 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-rcnpp"
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.759919 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-rcnpp"]
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.771279 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-rcnpp"]
Jan 24 04:00:11 crc kubenswrapper[4772]: I0124 04:00:11.896154 4772 scope.go:117] "RemoveContainer" containerID="464ec0e84c99cf817ba53728ad7ae6ad152f54b73c6551a71dcf752f1995274c"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.127349 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.187967 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-scripts\") pod \"f87e40bd-c14e-48e3-aa49-9d2d79064781\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") "
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.188042 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7h22\" (UniqueName: \"kubernetes.io/projected/f87e40bd-c14e-48e3-aa49-9d2d79064781-kube-api-access-q7h22\") pod \"f87e40bd-c14e-48e3-aa49-9d2d79064781\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") "
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.188073 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f87e40bd-c14e-48e3-aa49-9d2d79064781-horizon-secret-key\") pod \"f87e40bd-c14e-48e3-aa49-9d2d79064781\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") "
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.188100 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-config-data\") pod \"f87e40bd-c14e-48e3-aa49-9d2d79064781\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") "
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.188149 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87e40bd-c14e-48e3-aa49-9d2d79064781-logs\") pod \"f87e40bd-c14e-48e3-aa49-9d2d79064781\" (UID: \"f87e40bd-c14e-48e3-aa49-9d2d79064781\") "
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.188993 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f87e40bd-c14e-48e3-aa49-9d2d79064781-logs" (OuterVolumeSpecName: "logs") pod "f87e40bd-c14e-48e3-aa49-9d2d79064781" (UID: "f87e40bd-c14e-48e3-aa49-9d2d79064781"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.195405 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f87e40bd-c14e-48e3-aa49-9d2d79064781-kube-api-access-q7h22" (OuterVolumeSpecName: "kube-api-access-q7h22") pod "f87e40bd-c14e-48e3-aa49-9d2d79064781" (UID: "f87e40bd-c14e-48e3-aa49-9d2d79064781"). InnerVolumeSpecName "kube-api-access-q7h22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.195474 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f87e40bd-c14e-48e3-aa49-9d2d79064781-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f87e40bd-c14e-48e3-aa49-9d2d79064781" (UID: "f87e40bd-c14e-48e3-aa49-9d2d79064781"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.206527 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-scripts" (OuterVolumeSpecName: "scripts") pod "f87e40bd-c14e-48e3-aa49-9d2d79064781" (UID: "f87e40bd-c14e-48e3-aa49-9d2d79064781"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.211507 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-config-data" (OuterVolumeSpecName: "config-data") pod "f87e40bd-c14e-48e3-aa49-9d2d79064781" (UID: "f87e40bd-c14e-48e3-aa49-9d2d79064781"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.289576 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.289622 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7h22\" (UniqueName: \"kubernetes.io/projected/f87e40bd-c14e-48e3-aa49-9d2d79064781-kube-api-access-q7h22\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.289641 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f87e40bd-c14e-48e3-aa49-9d2d79064781-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.289656 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f87e40bd-c14e-48e3-aa49-9d2d79064781-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.289667 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87e40bd-c14e-48e3-aa49-9d2d79064781-logs\") on node \"crc\" DevicePath \"\""
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.666257 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" path="/var/lib/kubelet/pods/f912cd99-d531-4af0-8c94-0f10ab3b5503/volumes"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.704754 4772 generic.go:334] "Generic (PLEG): container finished" podID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerID="687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed" exitCode=137
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.704807 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" event={"ID":"f87e40bd-c14e-48e3-aa49-9d2d79064781","Type":"ContainerDied","Data":"687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed"}
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.704840 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb" event={"ID":"f87e40bd-c14e-48e3-aa49-9d2d79064781","Type":"ContainerDied","Data":"ba7ef03c3f8bd883af07139d30c0381779594205ba5c6914438a9f6037bc2846"}
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.704860 4772 scope.go:117] "RemoveContainer" containerID="d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.704969 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.750810 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"]
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.761309 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-xlqnb"]
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.882992 4772 scope.go:117] "RemoveContainer" containerID="687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.895350 4772 scope.go:117] "RemoveContainer" containerID="d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20"
Jan 24 04:00:13 crc kubenswrapper[4772]: E0124 04:00:13.897020 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20\": container with ID starting with d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20 not found: ID does not exist" containerID="d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.897153 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20"} err="failed to get container status \"d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20\": rpc error: code = NotFound desc = could not find container \"d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20\": container with ID starting with d67ed57a445deeae8b5f313a6d2c6e7eb71898e7d64af7fd3f4d6527b0be9e20 not found: ID does not exist"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.897262 4772 scope.go:117] "RemoveContainer" containerID="687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed"
Jan 24 04:00:13 crc kubenswrapper[4772]: E0124 04:00:13.897666 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed\": container with ID starting with 687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed not found: ID does not exist" containerID="687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed"
Jan 24 04:00:13 crc kubenswrapper[4772]: I0124 04:00:13.897710 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed"} err="failed to get container status \"687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed\": rpc error: code = NotFound desc = could not find container \"687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed\": container with ID starting with 687c08bf06bb30b2beb89bf72dad331ffcc18702c3a405b932917d5d2f98a5ed not found: ID does not exist"
Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.936764 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-ssszp"]
Jan 24 04:00:14 crc kubenswrapper[4772]: E0124 04:00:14.938304 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def5cfbc-d4eb-4cd1-9222-ee936f51b740" containerName="collect-profiles"
Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.938417 4772 state_mem.go:107] "Deleted CPUSet assignment"
podUID="def5cfbc-d4eb-4cd1-9222-ee936f51b740" containerName="collect-profiles" Jan 24 04:00:14 crc kubenswrapper[4772]: E0124 04:00:14.938513 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.938593 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" Jan 24 04:00:14 crc kubenswrapper[4772]: E0124 04:00:14.938679 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon-log" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.938770 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon-log" Jan 24 04:00:14 crc kubenswrapper[4772]: E0124 04:00:14.938849 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon-log" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.938942 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon-log" Jan 24 04:00:14 crc kubenswrapper[4772]: E0124 04:00:14.939031 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.939110 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.939323 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon-log" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.939407 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon-log" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.939479 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" containerName="horizon" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.939550 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="def5cfbc-d4eb-4cd1-9222-ee936f51b740" containerName="collect-profiles" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.939754 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f912cd99-d531-4af0-8c94-0f10ab3b5503" containerName="horizon" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.940705 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.942870 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.943285 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon-horizon-dockercfg-zq9n8" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.943541 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"combined-ca-bundle" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.944081 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-scripts" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.944217 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"cert-horizon-svc" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.946040 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-config-data" Jan 24 04:00:14 crc kubenswrapper[4772]: I0124 04:00:14.964782 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-ssszp"] Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.026755 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"] Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.028576 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.082148 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"] Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.115614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkmgw\" (UniqueName: \"kubernetes.io/projected/3f475a93-1295-4f4a-a338-1742cfc5d13b-kube-api-access-dkmgw\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.115696 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-secret-key\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.115828 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-combined-ca-bundle\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.115934 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f475a93-1295-4f4a-a338-1742cfc5d13b-logs\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.116132 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-tls-certs\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.116405 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-config-data\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.116448 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-scripts\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.217854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-tls-certs\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.217929 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-combined-ca-bundle\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.217953 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f475a93-1295-4f4a-a338-1742cfc5d13b-logs\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.217974 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-config-data\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.217989 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-tls-certs\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218011 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr92s\" (UniqueName: \"kubernetes.io/projected/d7dcd704-8124-41c4-8bab-678fd9d02804-kube-api-access-mr92s\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc 
kubenswrapper[4772]: I0124 04:00:15.218095 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-combined-ca-bundle\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218118 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-config-data\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-scripts\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218157 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-scripts\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218181 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkmgw\" (UniqueName: \"kubernetes.io/projected/3f475a93-1295-4f4a-a338-1742cfc5d13b-kube-api-access-dkmgw\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218206 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7dcd704-8124-41c4-8bab-678fd9d02804-logs\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218223 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-secret-key\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218248 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-secret-key\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.218360 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f475a93-1295-4f4a-a338-1742cfc5d13b-logs\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc 
kubenswrapper[4772]: I0124 04:00:15.219210 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-scripts\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.219687 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-config-data\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.222761 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-secret-key\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.222971 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-combined-ca-bundle\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.223681 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-tls-certs\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.233348 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkmgw\" (UniqueName: \"kubernetes.io/projected/3f475a93-1295-4f4a-a338-1742cfc5d13b-kube-api-access-dkmgw\") pod \"horizon-5b545c459d-ssszp\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.312483 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.319379 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-tls-certs\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.319468 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-config-data\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.319503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr92s\" (UniqueName: \"kubernetes.io/projected/d7dcd704-8124-41c4-8bab-678fd9d02804-kube-api-access-mr92s\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.319544 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-combined-ca-bundle\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.319579 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-scripts\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.319625 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7dcd704-8124-41c4-8bab-678fd9d02804-logs\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.319651 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-secret-key\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.320442 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7dcd704-8124-41c4-8bab-678fd9d02804-logs\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.320458 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-scripts\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc 
kubenswrapper[4772]: I0124 04:00:15.321133 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-config-data\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"
Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.323579 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-secret-key\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"
Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.324327 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-tls-certs\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"
Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.324348 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-combined-ca-bundle\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"
Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.336793 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr92s\" (UniqueName: \"kubernetes.io/projected/d7dcd704-8124-41c4-8bab-678fd9d02804-kube-api-access-mr92s\") pod \"horizon-579fd4dcd4-gkxjh\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"
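Mount setup mirrors the teardown path in reverse: VerifyControllerAttachedVolume, then MountVolume started, then one "MountVolume.SetUp succeeded" per volume; each of the two horizon pods above declares seven volumes (scripts, config-data, logs, horizon-secret-key, horizon-tls-certs, combined-ca-bundle, and a projected kube-api-access token). A Python sketch that checks each pod reached SetUp for everything it declared, same hypothetical kubelet.log assumption:

    import re
    from collections import defaultdict

    # Collect every volume that reached "MountVolume.SetUp succeeded", per pod.
    setup = re.compile(
        r'MountVolume\.SetUp succeeded for volume \\"(?P<vol>[^\\]+)\\"'
        r'.*?pod="(?P<pod>[^"]+)"'
    )
    mounted = defaultdict(set)
    with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical filename
        for line in fh:
            for m in setup.finditer(line):
                mounted[m["pod"]].add(m["vol"])
    for pod, vols in sorted(mounted.items()):
        print(pod, len(vols), sorted(vols))  # expect 7 volumes per horizon pod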
Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.667186 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f87e40bd-c14e-48e3-aa49-9d2d79064781" path="/var/lib/kubelet/pods/f87e40bd-c14e-48e3-aa49-9d2d79064781/volumes" Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.766342 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-ssszp"] Jan 24 04:00:15 crc kubenswrapper[4772]: W0124 04:00:15.767849 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f475a93_1295_4f4a_a338_1742cfc5d13b.slice/crio-5dd84e8b05270ce8d5c493b4500122e77f086a7a437ceca258dff1327af8d4bb WatchSource:0}: Error finding container 5dd84e8b05270ce8d5c493b4500122e77f086a7a437ceca258dff1327af8d4bb: Status 404 returned error can't find the container with id 5dd84e8b05270ce8d5c493b4500122e77f086a7a437ceca258dff1327af8d4bb Jan 24 04:00:15 crc kubenswrapper[4772]: I0124 04:00:15.807390 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"] Jan 24 04:00:15 crc kubenswrapper[4772]: W0124 04:00:15.812593 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7dcd704_8124_41c4_8bab_678fd9d02804.slice/crio-015798b81edb1eb7e86d4eb52b6e8180246dd6482b3f5e05ac3991fcfc9f2839 WatchSource:0}: Error finding container 015798b81edb1eb7e86d4eb52b6e8180246dd6482b3f5e05ac3991fcfc9f2839: Status 404 returned error can't find the container with id 015798b81edb1eb7e86d4eb52b6e8180246dd6482b3f5e05ac3991fcfc9f2839 Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.745839 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" event={"ID":"3f475a93-1295-4f4a-a338-1742cfc5d13b","Type":"ContainerStarted","Data":"9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea"} Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.746317 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" event={"ID":"3f475a93-1295-4f4a-a338-1742cfc5d13b","Type":"ContainerStarted","Data":"d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854"} Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.746334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" event={"ID":"3f475a93-1295-4f4a-a338-1742cfc5d13b","Type":"ContainerStarted","Data":"5dd84e8b05270ce8d5c493b4500122e77f086a7a437ceca258dff1327af8d4bb"} Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.749262 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" event={"ID":"d7dcd704-8124-41c4-8bab-678fd9d02804","Type":"ContainerStarted","Data":"721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0"} Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.749293 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" event={"ID":"d7dcd704-8124-41c4-8bab-678fd9d02804","Type":"ContainerStarted","Data":"1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730"} Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.749308 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" 
event={"ID":"d7dcd704-8124-41c4-8bab-678fd9d02804","Type":"ContainerStarted","Data":"015798b81edb1eb7e86d4eb52b6e8180246dd6482b3f5e05ac3991fcfc9f2839"} Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.782135 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" podStartSLOduration=2.7820963450000002 podStartE2EDuration="2.782096345s" podCreationTimestamp="2026-01-24 04:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 04:00:16.770834287 +0000 UTC m=+1113.807925012" watchObservedRunningTime="2026-01-24 04:00:16.782096345 +0000 UTC m=+1113.819187110" Jan 24 04:00:16 crc kubenswrapper[4772]: I0124 04:00:16.796810 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" podStartSLOduration=1.7967972909999999 podStartE2EDuration="1.796797291s" podCreationTimestamp="2026-01-24 04:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 04:00:16.794390733 +0000 UTC m=+1113.831481458" watchObservedRunningTime="2026-01-24 04:00:16.796797291 +0000 UTC m=+1113.833888016" Jan 24 04:00:25 crc kubenswrapper[4772]: I0124 04:00:25.313414 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:25 crc kubenswrapper[4772]: I0124 04:00:25.314132 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:25 crc kubenswrapper[4772]: I0124 04:00:25.377973 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:25 crc kubenswrapper[4772]: I0124 04:00:25.378108 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:00:35 crc kubenswrapper[4772]: I0124 04:00:35.380215 4772 prober.go:107] "Probe failed" probeType="Startup" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.90:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.90:8443: connect: connection refused" Jan 24 04:00:37 crc kubenswrapper[4772]: I0124 04:00:37.013426 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:38 crc kubenswrapper[4772]: I0124 04:00:38.667677 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.426123 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-ssszp"] Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.427110 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon" containerID="cri-o://9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea" gracePeriod=30 Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.427334 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon-log" containerID="cri-o://d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854" gracePeriod=30 Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.447986 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"] Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.448235 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon-log" containerID="cri-o://1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730" gracePeriod=30 Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.448365 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon" containerID="cri-o://721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0" gracePeriod=30 Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.943564 4772 generic.go:334] "Generic (PLEG): container finished" podID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerID="721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0" exitCode=0 Jan 24 04:00:39 crc kubenswrapper[4772]: I0124 04:00:39.943675 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" event={"ID":"d7dcd704-8124-41c4-8bab-678fd9d02804","Type":"ContainerDied","Data":"721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0"} Jan 24 04:00:42 crc kubenswrapper[4772]: I0124 04:00:42.981951 4772 generic.go:334] "Generic (PLEG): container finished" podID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerID="9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea" exitCode=0 Jan 24 04:00:42 crc kubenswrapper[4772]: I0124 04:00:42.982041 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" event={"ID":"3f475a93-1295-4f4a-a338-1742cfc5d13b","Type":"ContainerDied","Data":"9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea"} Jan 24 04:00:45 crc kubenswrapper[4772]: I0124 04:00:45.313990 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.89:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.89:8443: connect: connection refused" Jan 24 04:00:55 crc kubenswrapper[4772]: I0124 04:00:55.314244 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.89:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.89:8443: connect: connection refused" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.167647 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-cron-29487121-k9qk5"] Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.170110 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.184274 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-cron-29487121-k9qk5"] Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.278473 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-config-data\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.278527 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-fernet-keys\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.278601 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xvd\" (UniqueName: \"kubernetes.io/projected/e6f563d9-652b-498a-9b1e-503beadc1bfc-kube-api-access-q2xvd\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.379964 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-config-data\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.380008 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-fernet-keys\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.380063 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xvd\" (UniqueName: \"kubernetes.io/projected/e6f563d9-652b-498a-9b1e-503beadc1bfc-kube-api-access-q2xvd\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.386425 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-fernet-keys\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.388599 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-config-data\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.399306 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2xvd\" (UniqueName: \"kubernetes.io/projected/e6f563d9-652b-498a-9b1e-503beadc1bfc-kube-api-access-q2xvd\") pod \"keystone-cron-29487121-k9qk5\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.503352 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:00 crc kubenswrapper[4772]: I0124 04:01:00.981340 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-cron-29487121-k9qk5"] Jan 24 04:01:01 crc kubenswrapper[4772]: I0124 04:01:01.165920 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" event={"ID":"e6f563d9-652b-498a-9b1e-503beadc1bfc","Type":"ContainerStarted","Data":"7166289c9d9c0aa48229b8c924e78c98e3670c5ae79b3c78203f97b25849bc8c"} Jan 24 04:01:01 crc kubenswrapper[4772]: I0124 04:01:01.165989 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" event={"ID":"e6f563d9-652b-498a-9b1e-503beadc1bfc","Type":"ContainerStarted","Data":"4ee5d94e3e493b5602a5a95d61003b66b007c83c646bc5437837865ac4456ed2"} Jan 24 04:01:01 crc kubenswrapper[4772]: I0124 04:01:01.185452 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" podStartSLOduration=1.185433622 podStartE2EDuration="1.185433622s" podCreationTimestamp="2026-01-24 04:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 04:01:01.180908284 +0000 UTC m=+1158.217999009" watchObservedRunningTime="2026-01-24 04:01:01.185433622 +0000 UTC m=+1158.222524367" Jan 24 04:01:03 crc kubenswrapper[4772]: I0124 04:01:03.188535 4772 generic.go:334] "Generic (PLEG): container finished" podID="e6f563d9-652b-498a-9b1e-503beadc1bfc" containerID="7166289c9d9c0aa48229b8c924e78c98e3670c5ae79b3c78203f97b25849bc8c" exitCode=0 Jan 24 04:01:03 crc kubenswrapper[4772]: I0124 04:01:03.188797 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" event={"ID":"e6f563d9-652b-498a-9b1e-503beadc1bfc","Type":"ContainerDied","Data":"7166289c9d9c0aa48229b8c924e78c98e3670c5ae79b3c78203f97b25849bc8c"} Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.539203 4772 util.go:48] "No ready sandbox for pod can be found. 
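The keystone-cron pod above compresses a one-shot job lifecycle into a few seconds: SyncLoop ADD, volume mounts, ContainerStarted, container finished with exitCode=0, ContainerDied, then the volume teardown that follows. A Python sketch that reconstructs each pod's API-driven SyncLoop order, same hypothetical kubelet.log assumption:

    import re
    from collections import defaultdict

    # Rebuild each pod's API-sourced SyncLoop history (ADD/UPDATE/DELETE/REMOVE).
    sync = re.compile(
        r'"SyncLoop (?P<op>ADD|UPDATE|DELETE|REMOVE)" source="api"'
        r' pods=\["(?P<pod>[^"]+)"\]'
    )
    history = defaultdict(list)
    with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical filename
        for line in fh:
            for m in sync.finditer(line):
                history[m["pod"]].append(m["op"])
    for pod, ops in sorted(history.items()):
        print(pod, " -> ".join(ops))  # a deleted pod ends ... -> DELETE -> REMOVE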
Need to start a new one" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.662067 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-config-data\") pod \"e6f563d9-652b-498a-9b1e-503beadc1bfc\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.662138 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2xvd\" (UniqueName: \"kubernetes.io/projected/e6f563d9-652b-498a-9b1e-503beadc1bfc-kube-api-access-q2xvd\") pod \"e6f563d9-652b-498a-9b1e-503beadc1bfc\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.662350 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-fernet-keys\") pod \"e6f563d9-652b-498a-9b1e-503beadc1bfc\" (UID: \"e6f563d9-652b-498a-9b1e-503beadc1bfc\") " Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.667231 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f563d9-652b-498a-9b1e-503beadc1bfc-kube-api-access-q2xvd" (OuterVolumeSpecName: "kube-api-access-q2xvd") pod "e6f563d9-652b-498a-9b1e-503beadc1bfc" (UID: "e6f563d9-652b-498a-9b1e-503beadc1bfc"). InnerVolumeSpecName "kube-api-access-q2xvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.667802 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e6f563d9-652b-498a-9b1e-503beadc1bfc" (UID: "e6f563d9-652b-498a-9b1e-503beadc1bfc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.708617 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-config-data" (OuterVolumeSpecName: "config-data") pod "e6f563d9-652b-498a-9b1e-503beadc1bfc" (UID: "e6f563d9-652b-498a-9b1e-503beadc1bfc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.764918 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.764952 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2xvd\" (UniqueName: \"kubernetes.io/projected/e6f563d9-652b-498a-9b1e-503beadc1bfc-kube-api-access-q2xvd\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:04 crc kubenswrapper[4772]: I0124 04:01:04.764963 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f563d9-652b-498a-9b1e-503beadc1bfc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:05 crc kubenswrapper[4772]: I0124 04:01:05.208911 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" event={"ID":"e6f563d9-652b-498a-9b1e-503beadc1bfc","Type":"ContainerDied","Data":"4ee5d94e3e493b5602a5a95d61003b66b007c83c646bc5437837865ac4456ed2"} Jan 24 04:01:05 crc kubenswrapper[4772]: I0124 04:01:05.209379 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee5d94e3e493b5602a5a95d61003b66b007c83c646bc5437837865ac4456ed2" Jan 24 04:01:05 crc kubenswrapper[4772]: I0124 04:01:05.209004 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-cron-29487121-k9qk5" Jan 24 04:01:05 crc kubenswrapper[4772]: I0124 04:01:05.313598 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.89:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.89:8443: connect: connection refused" Jan 24 04:01:05 crc kubenswrapper[4772]: I0124 04:01:05.313811 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:01:09 crc kubenswrapper[4772]: E0124 04:01:09.657071 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7dcd704_8124_41c4_8bab_678fd9d02804.slice/crio-1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7dcd704_8124_41c4_8bab_678fd9d02804.slice/crio-conmon-1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730.scope\": RecentStats: unable to find data in memory cache]" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.855450 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.861330 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955318 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f475a93-1295-4f4a-a338-1742cfc5d13b-logs\") pod \"3f475a93-1295-4f4a-a338-1742cfc5d13b\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955453 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f475a93-1295-4f4a-a338-1742cfc5d13b-logs" (OuterVolumeSpecName: "logs") pod "3f475a93-1295-4f4a-a338-1742cfc5d13b" (UID: "3f475a93-1295-4f4a-a338-1742cfc5d13b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955461 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-tls-certs\") pod \"d7dcd704-8124-41c4-8bab-678fd9d02804\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955586 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr92s\" (UniqueName: \"kubernetes.io/projected/d7dcd704-8124-41c4-8bab-678fd9d02804-kube-api-access-mr92s\") pod \"d7dcd704-8124-41c4-8bab-678fd9d02804\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955627 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-secret-key\") pod \"3f475a93-1295-4f4a-a338-1742cfc5d13b\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955684 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-scripts\") pod \"d7dcd704-8124-41c4-8bab-678fd9d02804\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955732 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkmgw\" (UniqueName: \"kubernetes.io/projected/3f475a93-1295-4f4a-a338-1742cfc5d13b-kube-api-access-dkmgw\") pod \"3f475a93-1295-4f4a-a338-1742cfc5d13b\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955770 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-combined-ca-bundle\") pod \"d7dcd704-8124-41c4-8bab-678fd9d02804\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955805 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-tls-certs\") pod \"3f475a93-1295-4f4a-a338-1742cfc5d13b\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.955865 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-secret-key\") pod \"d7dcd704-8124-41c4-8bab-678fd9d02804\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.957167 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7dcd704-8124-41c4-8bab-678fd9d02804-logs\") pod \"d7dcd704-8124-41c4-8bab-678fd9d02804\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.957199 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-config-data\") pod \"3f475a93-1295-4f4a-a338-1742cfc5d13b\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.957220 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-config-data\") pod \"d7dcd704-8124-41c4-8bab-678fd9d02804\" (UID: \"d7dcd704-8124-41c4-8bab-678fd9d02804\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.957243 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-scripts\") pod \"3f475a93-1295-4f4a-a338-1742cfc5d13b\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.957262 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-combined-ca-bundle\") pod \"3f475a93-1295-4f4a-a338-1742cfc5d13b\" (UID: \"3f475a93-1295-4f4a-a338-1742cfc5d13b\") " Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.958029 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f475a93-1295-4f4a-a338-1742cfc5d13b-logs\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.958015 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7dcd704-8124-41c4-8bab-678fd9d02804-logs" (OuterVolumeSpecName: "logs") pod "d7dcd704-8124-41c4-8bab-678fd9d02804" (UID: "d7dcd704-8124-41c4-8bab-678fd9d02804"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.963247 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3f475a93-1295-4f4a-a338-1742cfc5d13b" (UID: "3f475a93-1295-4f4a-a338-1742cfc5d13b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.963301 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f475a93-1295-4f4a-a338-1742cfc5d13b-kube-api-access-dkmgw" (OuterVolumeSpecName: "kube-api-access-dkmgw") pod "3f475a93-1295-4f4a-a338-1742cfc5d13b" (UID: "3f475a93-1295-4f4a-a338-1742cfc5d13b"). InnerVolumeSpecName "kube-api-access-dkmgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.963917 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d7dcd704-8124-41c4-8bab-678fd9d02804" (UID: "d7dcd704-8124-41c4-8bab-678fd9d02804"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.975159 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-scripts" (OuterVolumeSpecName: "scripts") pod "d7dcd704-8124-41c4-8bab-678fd9d02804" (UID: "d7dcd704-8124-41c4-8bab-678fd9d02804"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.975959 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7dcd704-8124-41c4-8bab-678fd9d02804-kube-api-access-mr92s" (OuterVolumeSpecName: "kube-api-access-mr92s") pod "d7dcd704-8124-41c4-8bab-678fd9d02804" (UID: "d7dcd704-8124-41c4-8bab-678fd9d02804"). InnerVolumeSpecName "kube-api-access-mr92s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.977490 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-config-data" (OuterVolumeSpecName: "config-data") pod "3f475a93-1295-4f4a-a338-1742cfc5d13b" (UID: "3f475a93-1295-4f4a-a338-1742cfc5d13b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.977771 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f475a93-1295-4f4a-a338-1742cfc5d13b" (UID: "3f475a93-1295-4f4a-a338-1742cfc5d13b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.979297 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7dcd704-8124-41c4-8bab-678fd9d02804" (UID: "d7dcd704-8124-41c4-8bab-678fd9d02804"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.981020 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-config-data" (OuterVolumeSpecName: "config-data") pod "d7dcd704-8124-41c4-8bab-678fd9d02804" (UID: "d7dcd704-8124-41c4-8bab-678fd9d02804"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.981965 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-scripts" (OuterVolumeSpecName: "scripts") pod "3f475a93-1295-4f4a-a338-1742cfc5d13b" (UID: "3f475a93-1295-4f4a-a338-1742cfc5d13b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.994608 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d7dcd704-8124-41c4-8bab-678fd9d02804" (UID: "d7dcd704-8124-41c4-8bab-678fd9d02804"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:09 crc kubenswrapper[4772]: I0124 04:01:09.997149 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3f475a93-1295-4f4a-a338-1742cfc5d13b" (UID: "3f475a93-1295-4f4a-a338-1742cfc5d13b"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059180 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059459 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr92s\" (UniqueName: \"kubernetes.io/projected/d7dcd704-8124-41c4-8bab-678fd9d02804-kube-api-access-mr92s\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059484 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059496 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059508 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkmgw\" (UniqueName: \"kubernetes.io/projected/3f475a93-1295-4f4a-a338-1742cfc5d13b-kube-api-access-dkmgw\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059517 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059526 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059536 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7dcd704-8124-41c4-8bab-678fd9d02804-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059545 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7dcd704-8124-41c4-8bab-678fd9d02804-logs\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059583 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059592 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f475a93-1295-4f4a-a338-1742cfc5d13b-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059601 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7dcd704-8124-41c4-8bab-678fd9d02804-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.059609 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f475a93-1295-4f4a-a338-1742cfc5d13b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.254972 4772 generic.go:334] "Generic (PLEG): container finished" podID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerID="d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854" exitCode=137 Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.255005 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.255027 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" event={"ID":"3f475a93-1295-4f4a-a338-1742cfc5d13b","Type":"ContainerDied","Data":"d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854"} Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.255572 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-ssszp" event={"ID":"3f475a93-1295-4f4a-a338-1742cfc5d13b","Type":"ContainerDied","Data":"5dd84e8b05270ce8d5c493b4500122e77f086a7a437ceca258dff1327af8d4bb"} Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.255597 4772 scope.go:117] "RemoveContainer" containerID="9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.258875 4772 generic.go:334] "Generic (PLEG): container finished" podID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerID="1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730" exitCode=137 Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.258913 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" event={"ID":"d7dcd704-8124-41c4-8bab-678fd9d02804","Type":"ContainerDied","Data":"1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730"} Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.258938 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.258965 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh" event={"ID":"d7dcd704-8124-41c4-8bab-678fd9d02804","Type":"ContainerDied","Data":"015798b81edb1eb7e86d4eb52b6e8180246dd6482b3f5e05ac3991fcfc9f2839"} Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.305800 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-ssszp"] Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.321212 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-ssszp"] Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.329087 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"] Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.335139 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-gkxjh"] Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.433398 4772 scope.go:117] "RemoveContainer" containerID="d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.447515 4772 scope.go:117] "RemoveContainer" containerID="9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea" Jan 24 04:01:10 crc kubenswrapper[4772]: E0124 04:01:10.448251 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea\": container with ID starting with 9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea not found: ID does not exist" containerID="9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.448303 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea"} err="failed to get container status \"9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea\": rpc error: code = NotFound desc = could not find container \"9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea\": container with ID starting with 9fe90df9e7bb06d89557bf5ee4896be963f29d0c38f2e5659cde9e71f8eb4eea not found: ID does not exist" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.448338 4772 scope.go:117] "RemoveContainer" containerID="d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854" Jan 24 04:01:10 crc kubenswrapper[4772]: E0124 04:01:10.448801 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854\": container with ID starting with d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854 not found: ID does not exist" containerID="d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.448849 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854"} err="failed to get container status \"d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854\": rpc error: code = NotFound desc = could not find container 
\"d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854\": container with ID starting with d5370f28954f8bc51e3870614aacf9a62505bae1c92569dfecaf7b940c688854 not found: ID does not exist" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.448881 4772 scope.go:117] "RemoveContainer" containerID="721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.610831 4772 scope.go:117] "RemoveContainer" containerID="1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.626342 4772 scope.go:117] "RemoveContainer" containerID="721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0" Jan 24 04:01:10 crc kubenswrapper[4772]: E0124 04:01:10.626855 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0\": container with ID starting with 721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0 not found: ID does not exist" containerID="721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.626892 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0"} err="failed to get container status \"721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0\": rpc error: code = NotFound desc = could not find container \"721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0\": container with ID starting with 721a6ae5d760615a294a8cac684ac3963d1f3eadc684f5f236b5733d774716e0 not found: ID does not exist" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.626919 4772 scope.go:117] "RemoveContainer" containerID="1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730" Jan 24 04:01:10 crc kubenswrapper[4772]: E0124 04:01:10.627165 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730\": container with ID starting with 1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730 not found: ID does not exist" containerID="1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730" Jan 24 04:01:10 crc kubenswrapper[4772]: I0124 04:01:10.627188 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730"} err="failed to get container status \"1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730\": rpc error: code = NotFound desc = could not find container \"1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730\": container with ID starting with 1bfba276b80cf04085b31bf0fc79bbb8d7e8fcafdd39ce8eea1b6f3359ed0730 not found: ID does not exist" Jan 24 04:01:11 crc kubenswrapper[4772]: I0124 04:01:11.673992 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" path="/var/lib/kubelet/pods/3f475a93-1295-4f4a-a338-1742cfc5d13b/volumes" Jan 24 04:01:11 crc kubenswrapper[4772]: I0124 04:01:11.675342 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" path="/var/lib/kubelet/pods/d7dcd704-8124-41c4-8bab-678fd9d02804/volumes" Jan 24 04:01:17 crc 
kubenswrapper[4772]: I0124 04:01:17.818938 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-7jv6b"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.825703 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-dptfs"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.833471 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-dptfs"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.840921 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-7jv6b"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.849937 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-578d7f9b5f-s458x"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.850235 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" podUID="15e139ca-fa05-4701-b9d4-e4524f011e5d" containerName="keystone-api" containerID="cri-o://5f1adc023cdb78f59cb5b997da7cd6e791c9ed2b6ee1bbc8e9c823d76db51282" gracePeriod=30 Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.859838 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-cron-29487121-k9qk5"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.864072 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-cron-29487121-k9qk5"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.879609 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone253c-account-delete-h5v7x"] Jan 24 04:01:17 crc kubenswrapper[4772]: E0124 04:01:17.879925 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon-log" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.879950 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon-log" Jan 24 04:01:17 crc kubenswrapper[4772]: E0124 04:01:17.879976 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.879984 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon" Jan 24 04:01:17 crc kubenswrapper[4772]: E0124 04:01:17.879994 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f563d9-652b-498a-9b1e-503beadc1bfc" containerName="keystone-cron" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880001 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f563d9-652b-498a-9b1e-503beadc1bfc" containerName="keystone-cron" Jan 24 04:01:17 crc kubenswrapper[4772]: E0124 04:01:17.880014 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880021 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon" Jan 24 04:01:17 crc kubenswrapper[4772]: E0124 04:01:17.880040 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon-log" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880048 4772 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon-log" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880184 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon-log" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880197 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f563d9-652b-498a-9b1e-503beadc1bfc" containerName="keystone-cron" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880211 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f475a93-1295-4f4a-a338-1742cfc5d13b" containerName="horizon" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880226 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880238 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7dcd704-8124-41c4-8bab-678fd9d02804" containerName="horizon-log" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.880712 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.888668 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone253c-account-delete-h5v7x"] Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.910311 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js25q\" (UniqueName: \"kubernetes.io/projected/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-kube-api-access-js25q\") pod \"keystone253c-account-delete-h5v7x\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") " pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:17 crc kubenswrapper[4772]: I0124 04:01:17.910418 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts\") pod \"keystone253c-account-delete-h5v7x\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") " pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.011425 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts\") pod \"keystone253c-account-delete-h5v7x\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") " pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.011509 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js25q\" (UniqueName: \"kubernetes.io/projected/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-kube-api-access-js25q\") pod \"keystone253c-account-delete-h5v7x\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") " pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.012422 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts\") pod \"keystone253c-account-delete-h5v7x\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") " 
pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.032789 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js25q\" (UniqueName: \"kubernetes.io/projected/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-kube-api-access-js25q\") pod \"keystone253c-account-delete-h5v7x\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") " pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.202326 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.483660 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone253c-account-delete-h5v7x"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.662465 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-k8xp2"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.670628 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-k8xp2"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.696490 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8znrw"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.697721 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.699760 4772 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.703710 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.716617 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.732468 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.745956 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8znrw"] Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.761231 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8znrw"] Jan 24 04:01:18 crc kubenswrapper[4772]: E0124 04:01:18.761689 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-4dlz7 operator-scripts], unattached volumes=[], failed to process volumes=[kube-api-access-4dlz7 operator-scripts]: context canceled" pod="horizon-kuttl-tests/root-account-create-update-8znrw" podUID="33a88642-bfab-4b37-adfb-f1f354cc070f" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.826596 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dlz7\" (UniqueName: \"kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.826668 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.851569 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/openstack-galera-2" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerName="galera" containerID="cri-o://a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0" gracePeriod=30 Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.928483 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dlz7\" (UniqueName: \"kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:18 crc kubenswrapper[4772]: I0124 04:01:18.928600 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:18 crc kubenswrapper[4772]: E0124 04:01:18.928756 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 24 04:01:18 crc kubenswrapper[4772]: E0124 04:01:18.928831 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts podName:33a88642-bfab-4b37-adfb-f1f354cc070f nodeName:}" failed. No retries permitted until 2026-01-24 04:01:19.428807363 +0000 UTC m=+1176.465898088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts") pod "root-account-create-update-8znrw" (UID: "33a88642-bfab-4b37-adfb-f1f354cc070f") : configmap "openstack-scripts" not found Jan 24 04:01:18 crc kubenswrapper[4772]: E0124 04:01:18.931387 4772 projected.go:194] Error preparing data for projected volume kube-api-access-4dlz7 for pod horizon-kuttl-tests/root-account-create-update-8znrw: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 24 04:01:18 crc kubenswrapper[4772]: E0124 04:01:18.931456 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7 podName:33a88642-bfab-4b37-adfb-f1f354cc070f nodeName:}" failed. No retries permitted until 2026-01-24 04:01:19.431437266 +0000 UTC m=+1176.468528001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4dlz7" (UniqueName: "kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7") pod "root-account-create-update-8znrw" (UID: "33a88642-bfab-4b37-adfb-f1f354cc070f") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.341016 4772 generic.go:334] "Generic (PLEG): container finished" podID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerID="6ebf9674d82ca95d78fa3396380d530b78e14611aa31bf64387874c474f083cd" exitCode=1 Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.341105 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" event={"ID":"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4","Type":"ContainerDied","Data":"6ebf9674d82ca95d78fa3396380d530b78e14611aa31bf64387874c474f083cd"} Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.341168 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" event={"ID":"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4","Type":"ContainerStarted","Data":"96ff030422dcd15b6b9828b8a23c881cfff6acafecaeb2dfb96093dbcc6d3c2b"} Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.341214 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.341629 4772 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" secret="" err="secret \"galera-openstack-dockercfg-xlz4p\" not found" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.341689 4772 scope.go:117] "RemoveContainer" containerID="6ebf9674d82ca95d78fa3396380d530b78e14611aa31bf64387874c474f083cd" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.398404 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.398630 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/memcached-0" podUID="f2831b28-9004-4323-a39f-0a43d7cbb6c0" containerName="memcached" containerID="cri-o://3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a" gracePeriod=30 Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.436099 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.436223 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dlz7\" (UniqueName: \"kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.436256 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.436329 4772 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts podName:33a88642-bfab-4b37-adfb-f1f354cc070f nodeName:}" failed. No retries permitted until 2026-01-24 04:01:20.436310181 +0000 UTC m=+1177.473400906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts") pod "root-account-create-update-8znrw" (UID: "33a88642-bfab-4b37-adfb-f1f354cc070f") : configmap "openstack-scripts" not found Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.436353 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.436403 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts podName:4f71f5ee-2df4-4622-82cd-1c3ac17d88b4 nodeName:}" failed. No retries permitted until 2026-01-24 04:01:19.936387334 +0000 UTC m=+1176.973478049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts") pod "keystone253c-account-delete-h5v7x" (UID: "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4") : configmap "openstack-scripts" not found Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.442209 4772 projected.go:194] Error preparing data for projected volume kube-api-access-4dlz7 for pod horizon-kuttl-tests/root-account-create-update-8znrw: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.442281 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7 podName:33a88642-bfab-4b37-adfb-f1f354cc070f nodeName:}" failed. No retries permitted until 2026-01-24 04:01:20.442262958 +0000 UTC m=+1177.479353683 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4dlz7" (UniqueName: "kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7") pod "root-account-create-update-8znrw" (UID: "33a88642-bfab-4b37-adfb-f1f354cc070f") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.446600 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.669253 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560ff308-c57e-4dcf-9398-9cc95c8da04a" path="/var/lib/kubelet/pods/560ff308-c57e-4dcf-9398-9cc95c8da04a/volumes" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.670370 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a0008c7-d89a-43f7-9469-64d1c8ce54dd" path="/var/lib/kubelet/pods/6a0008c7-d89a-43f7-9469-64d1c8ce54dd/volumes" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.671358 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a651548f-d95b-43ef-ad1a-6c9c2a67b1d9" path="/var/lib/kubelet/pods/a651548f-d95b-43ef-ad1a-6c9c2a67b1d9/volumes" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.672078 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f563d9-652b-498a-9b1e-503beadc1bfc" path="/var/lib/kubelet/pods/e6f563d9-652b-498a-9b1e-503beadc1bfc/volumes" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.747767 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.836584 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a88642_bfab_4b37_adfb_f1f354cc070f.slice\": RecentStats: unable to find data in memory cache]" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.845080 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-operator-scripts\") pod \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.845158 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kolla-config\") pod \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.845232 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.845513 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pp7s\" (UniqueName: \"kubernetes.io/projected/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kube-api-access-9pp7s\") pod \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.845586 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-default\") pod \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.845661 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-generated\") pod \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\" (UID: \"f7ee0cb0-31f8-40c4-baee-a20c07e97162\") " Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.847016 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "f7ee0cb0-31f8-40c4-baee-a20c07e97162" (UID: "f7ee0cb0-31f8-40c4-baee-a20c07e97162"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.847776 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "f7ee0cb0-31f8-40c4-baee-a20c07e97162" (UID: "f7ee0cb0-31f8-40c4-baee-a20c07e97162"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.848049 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7ee0cb0-31f8-40c4-baee-a20c07e97162" (UID: "f7ee0cb0-31f8-40c4-baee-a20c07e97162"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.848181 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "f7ee0cb0-31f8-40c4-baee-a20c07e97162" (UID: "f7ee0cb0-31f8-40c4-baee-a20c07e97162"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.853484 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kube-api-access-9pp7s" (OuterVolumeSpecName: "kube-api-access-9pp7s") pod "f7ee0cb0-31f8-40c4-baee-a20c07e97162" (UID: "f7ee0cb0-31f8-40c4-baee-a20c07e97162"). InnerVolumeSpecName "kube-api-access-9pp7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.853610 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.860876 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "f7ee0cb0-31f8-40c4-baee-a20c07e97162" (UID: "f7ee0cb0-31f8-40c4-baee-a20c07e97162"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.948125 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.948279 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.948321 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 24 04:01:19 crc kubenswrapper[4772]: E0124 04:01:19.948370 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts podName:4f71f5ee-2df4-4622-82cd-1c3ac17d88b4 nodeName:}" failed. No retries permitted until 2026-01-24 04:01:20.948355656 +0000 UTC m=+1177.985446381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts") pod "keystone253c-account-delete-h5v7x" (UID: "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4") : configmap "openstack-scripts" not found Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.948380 4772 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.948433 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.948446 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pp7s\" (UniqueName: \"kubernetes.io/projected/f7ee0cb0-31f8-40c4-baee-a20c07e97162-kube-api-access-9pp7s\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.948459 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7ee0cb0-31f8-40c4-baee-a20c07e97162-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:19 crc kubenswrapper[4772]: I0124 04:01:19.961457 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.049759 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.272278 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.350928 4772 generic.go:334] "Generic (PLEG): container finished" podID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerID="9656b6bcb329b3b5e78795747c229ab7fbed50b5340ece5752b913589aa71b06" exitCode=1 Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.351010 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" event={"ID":"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4","Type":"ContainerDied","Data":"9656b6bcb329b3b5e78795747c229ab7fbed50b5340ece5752b913589aa71b06"} Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.351050 4772 scope.go:117] "RemoveContainer" containerID="6ebf9674d82ca95d78fa3396380d530b78e14611aa31bf64387874c474f083cd" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.351702 4772 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" secret="" err="secret \"galera-openstack-dockercfg-xlz4p\" not found" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.351795 4772 scope.go:117] "RemoveContainer" containerID="9656b6bcb329b3b5e78795747c229ab7fbed50b5340ece5752b913589aa71b06" Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.352120 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone253c-account-delete-h5v7x_horizon-kuttl-tests(4f71f5ee-2df4-4622-82cd-1c3ac17d88b4)\"" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.353990 4772 generic.go:334] "Generic (PLEG): container finished" podID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerID="a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0" exitCode=0 Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.354030 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.354083 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"f7ee0cb0-31f8-40c4-baee-a20c07e97162","Type":"ContainerDied","Data":"a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0"} Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.354126 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"f7ee0cb0-31f8-40c4-baee-a20c07e97162","Type":"ContainerDied","Data":"f942cd6c5b2678c423eae0f5b7a98ffc24c3bfc2016d3db12751c828dad8dab1"} Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.354152 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.389905 4772 scope.go:117] "RemoveContainer" containerID="a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.408432 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/rabbitmq-server-0" podUID="05936f4c-b4df-4470-bff4-4ea5fee045ad" containerName="rabbitmq" containerID="cri-o://4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2" gracePeriod=604800 Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.419827 4772 scope.go:117] "RemoveContainer" containerID="aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.449909 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8znrw"] Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.456707 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dlz7\" (UniqueName: \"kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.457005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts\") pod \"root-account-create-update-8znrw\" (UID: \"33a88642-bfab-4b37-adfb-f1f354cc070f\") " pod="horizon-kuttl-tests/root-account-create-update-8znrw" Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.457188 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.457243 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts podName:33a88642-bfab-4b37-adfb-f1f354cc070f nodeName:}" failed. No retries permitted until 2026-01-24 04:01:22.457224932 +0000 UTC m=+1179.494315657 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts") pod "root-account-create-update-8znrw" (UID: "33a88642-bfab-4b37-adfb-f1f354cc070f") : configmap "openstack-scripts" not found
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.457485 4772 scope.go:117] "RemoveContainer" containerID="a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0"
Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.458236 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0\": container with ID starting with a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0 not found: ID does not exist" containerID="a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0"
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.458273 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0"} err="failed to get container status \"a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0\": rpc error: code = NotFound desc = could not find container \"a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0\": container with ID starting with a75712542e56325c2edeff27dac228f7bc24647e7d0224909afc03888e8776d0 not found: ID does not exist"
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.458292 4772 scope.go:117] "RemoveContainer" containerID="aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e"
Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.458512 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e\": container with ID starting with aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e not found: ID does not exist" containerID="aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e"
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.458533 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e"} err="failed to get container status \"aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e\": rpc error: code = NotFound desc = could not find container \"aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e\": container with ID starting with aac4462c4fb61198d3e545a9a4da3d1300afaa6d8e64fd745c9720525456f62e not found: ID does not exist"
Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.459700 4772 projected.go:194] Error preparing data for projected volume kube-api-access-4dlz7 for pod horizon-kuttl-tests/root-account-create-update-8znrw: failed to fetch token: pod "root-account-create-update-8znrw" not found
Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.459763 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7 podName:33a88642-bfab-4b37-adfb-f1f354cc070f nodeName:}" failed. No retries permitted until 2026-01-24 04:01:22.459728242 +0000 UTC m=+1179.496818967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4dlz7" (UniqueName: "kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7") pod "root-account-create-update-8znrw" (UID: "33a88642-bfab-4b37-adfb-f1f354cc070f") : failed to fetch token: pod "root-account-create-update-8znrw" not found
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.465387 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8znrw"]
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.472615 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"]
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.478494 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"]
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.558435 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a88642-bfab-4b37-adfb-f1f354cc070f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.558476 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dlz7\" (UniqueName: \"kubernetes.io/projected/33a88642-bfab-4b37-adfb-f1f354cc070f-kube-api-access-4dlz7\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:20 crc kubenswrapper[4772]: I0124 04:01:20.888217 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/openstack-galera-1" podUID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerName="galera" containerID="cri-o://e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b" gracePeriod=28
Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.964179 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Jan 24 04:01:20 crc kubenswrapper[4772]: E0124 04:01:20.964278 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts podName:4f71f5ee-2df4-4622-82cd-1c3ac17d88b4 nodeName:}" failed. No retries permitted until 2026-01-24 04:01:22.964251057 +0000 UTC m=+1180.001341802 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts") pod "keystone253c-account-delete-h5v7x" (UID: "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4") : configmap "openstack-scripts" not found
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.200958 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"]
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.201441 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" podUID="fdc11ac5-1127-4ca6-b518-9476edcdaafb" containerName="manager" containerID="cri-o://733f78ae577f592e537720480ceee881b358d0f4e498e222913a413efb9d743d" gracePeriod=10
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.341838 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.372166 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kolla-config\") pod \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.372283 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqb6p\" (UniqueName: \"kubernetes.io/projected/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kube-api-access-nqb6p\") pod \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.372334 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-config-data\") pod \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\" (UID: \"f2831b28-9004-4323-a39f-0a43d7cbb6c0\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.373274 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-config-data" (OuterVolumeSpecName: "config-data") pod "f2831b28-9004-4323-a39f-0a43d7cbb6c0" (UID: "f2831b28-9004-4323-a39f-0a43d7cbb6c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.373668 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "f2831b28-9004-4323-a39f-0a43d7cbb6c0" (UID: "f2831b28-9004-4323-a39f-0a43d7cbb6c0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.384433 4772 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" secret="" err="secret \"galera-openstack-dockercfg-xlz4p\" not found"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.384475 4772 scope.go:117] "RemoveContainer" containerID="9656b6bcb329b3b5e78795747c229ab7fbed50b5340ece5752b913589aa71b06"
Jan 24 04:01:21 crc kubenswrapper[4772]: E0124 04:01:21.384830 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone253c-account-delete-h5v7x_horizon-kuttl-tests(4f71f5ee-2df4-4622-82cd-1c3ac17d88b4)\"" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.384902 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kube-api-access-nqb6p" (OuterVolumeSpecName: "kube-api-access-nqb6p") pod "f2831b28-9004-4323-a39f-0a43d7cbb6c0" (UID: "f2831b28-9004-4323-a39f-0a43d7cbb6c0"). InnerVolumeSpecName "kube-api-access-nqb6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.407956 4772 generic.go:334] "Generic (PLEG): container finished" podID="fdc11ac5-1127-4ca6-b518-9476edcdaafb" containerID="733f78ae577f592e537720480ceee881b358d0f4e498e222913a413efb9d743d" exitCode=0
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.408044 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" event={"ID":"fdc11ac5-1127-4ca6-b518-9476edcdaafb","Type":"ContainerDied","Data":"733f78ae577f592e537720480ceee881b358d0f4e498e222913a413efb9d743d"}
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.412202 4772 generic.go:334] "Generic (PLEG): container finished" podID="15e139ca-fa05-4701-b9d4-e4524f011e5d" containerID="5f1adc023cdb78f59cb5b997da7cd6e791c9ed2b6ee1bbc8e9c823d76db51282" exitCode=0
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.412277 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" event={"ID":"15e139ca-fa05-4701-b9d4-e4524f011e5d","Type":"ContainerDied","Data":"5f1adc023cdb78f59cb5b997da7cd6e791c9ed2b6ee1bbc8e9c823d76db51282"}
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.413830 4772 generic.go:334] "Generic (PLEG): container finished" podID="f2831b28-9004-4323-a39f-0a43d7cbb6c0" containerID="3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a" exitCode=0
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.413863 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"f2831b28-9004-4323-a39f-0a43d7cbb6c0","Type":"ContainerDied","Data":"3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a"}
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.413885 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"f2831b28-9004-4323-a39f-0a43d7cbb6c0","Type":"ContainerDied","Data":"037cf6c502ad0fd7a2a180c61607c181d4afa33118a141dea381537f62338f73"}
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.413904 4772 scope.go:117] "RemoveContainer" containerID="3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.414007 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.476143 4772 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.476169 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqb6p\" (UniqueName: \"kubernetes.io/projected/f2831b28-9004-4323-a39f-0a43d7cbb6c0-kube-api-access-nqb6p\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.476184 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2831b28-9004-4323-a39f-0a43d7cbb6c0-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.483224 4772 scope.go:117] "RemoveContainer" containerID="3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a"
Jan 24 04:01:21 crc kubenswrapper[4772]: E0124 04:01:21.486119 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a\": container with ID starting with 3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a not found: ID does not exist" containerID="3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.486160 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a"} err="failed to get container status \"3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a\": rpc error: code = NotFound desc = could not find container \"3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a\": container with ID starting with 3eff46b7f8056bbce7e7ad677a4836653bec83c446eb3d0494636527b9e6376a not found: ID does not exist"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.510099 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/memcached-0"]
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.515009 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/memcached-0"]
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.579333 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-j6cwp"]
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.579533 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/horizon-operator-index-j6cwp" podUID="771cb13d-6b19-45a2-b23d-68156056b344" containerName="registry-server" containerID="cri-o://27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc" gracePeriod=30
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.610414 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"]
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.619746 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/080def0115b77cfa13350900d9fe2a3cff0c7f7cdd7ef766cb4d559cb99nhh8"]
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.668479 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a88642-bfab-4b37-adfb-f1f354cc070f" path="/var/lib/kubelet/pods/33a88642-bfab-4b37-adfb-f1f354cc070f/volumes"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.668824 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2831b28-9004-4323-a39f-0a43d7cbb6c0" path="/var/lib/kubelet/pods/f2831b28-9004-4323-a39f-0a43d7cbb6c0/volumes"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.669297 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f72e40f3-c149-4466-91fe-d21a07c221d8" path="/var/lib/kubelet/pods/f72e40f3-c149-4466-91fe-d21a07c221d8/volumes"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.669986 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" path="/var/lib/kubelet/pods/f7ee0cb0-31f8-40c4-baee-a20c07e97162/volumes"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.742705 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.748660 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883597 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmccv\" (UniqueName: \"kubernetes.io/projected/fdc11ac5-1127-4ca6-b518-9476edcdaafb-kube-api-access-pmccv\") pod \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883675 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-scripts\") pod \"15e139ca-fa05-4701-b9d4-e4524f011e5d\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883721 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-config-data\") pod \"15e139ca-fa05-4701-b9d4-e4524f011e5d\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883785 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-apiservice-cert\") pod \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883811 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt6t4\" (UniqueName: \"kubernetes.io/projected/15e139ca-fa05-4701-b9d4-e4524f011e5d-kube-api-access-zt6t4\") pod \"15e139ca-fa05-4701-b9d4-e4524f011e5d\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883861 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-webhook-cert\") pod \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\" (UID: \"fdc11ac5-1127-4ca6-b518-9476edcdaafb\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883888 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-credential-keys\") pod \"15e139ca-fa05-4701-b9d4-e4524f011e5d\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.883998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-fernet-keys\") pod \"15e139ca-fa05-4701-b9d4-e4524f011e5d\" (UID: \"15e139ca-fa05-4701-b9d4-e4524f011e5d\") "
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.897597 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "15e139ca-fa05-4701-b9d4-e4524f011e5d" (UID: "15e139ca-fa05-4701-b9d4-e4524f011e5d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.898612 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "15e139ca-fa05-4701-b9d4-e4524f011e5d" (UID: "15e139ca-fa05-4701-b9d4-e4524f011e5d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.900632 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc11ac5-1127-4ca6-b518-9476edcdaafb-kube-api-access-pmccv" (OuterVolumeSpecName: "kube-api-access-pmccv") pod "fdc11ac5-1127-4ca6-b518-9476edcdaafb" (UID: "fdc11ac5-1127-4ca6-b518-9476edcdaafb"). InnerVolumeSpecName "kube-api-access-pmccv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.901963 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e139ca-fa05-4701-b9d4-e4524f011e5d-kube-api-access-zt6t4" (OuterVolumeSpecName: "kube-api-access-zt6t4") pod "15e139ca-fa05-4701-b9d4-e4524f011e5d" (UID: "15e139ca-fa05-4701-b9d4-e4524f011e5d"). InnerVolumeSpecName "kube-api-access-zt6t4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.904029 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-scripts" (OuterVolumeSpecName: "scripts") pod "15e139ca-fa05-4701-b9d4-e4524f011e5d" (UID: "15e139ca-fa05-4701-b9d4-e4524f011e5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.904176 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "fdc11ac5-1127-4ca6-b518-9476edcdaafb" (UID: "fdc11ac5-1127-4ca6-b518-9476edcdaafb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.905987 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "fdc11ac5-1127-4ca6-b518-9476edcdaafb" (UID: "fdc11ac5-1127-4ca6-b518-9476edcdaafb"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.922024 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-config-data" (OuterVolumeSpecName: "config-data") pod "15e139ca-fa05-4701-b9d4-e4524f011e5d" (UID: "15e139ca-fa05-4701-b9d4-e4524f011e5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985561 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985598 4772 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985610 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt6t4\" (UniqueName: \"kubernetes.io/projected/15e139ca-fa05-4701-b9d4-e4524f011e5d-kube-api-access-zt6t4\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985619 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdc11ac5-1127-4ca6-b518-9476edcdaafb-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985627 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985635 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985643 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmccv\" (UniqueName: \"kubernetes.io/projected/fdc11ac5-1127-4ca6-b518-9476edcdaafb-kube-api-access-pmccv\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.985650 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e139ca-fa05-4701-b9d4-e4524f011e5d-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:21 crc kubenswrapper[4772]: I0124 04:01:21.997775 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.027461 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.086726 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86zfw\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-kube-api-access-86zfw\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.086786 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05936f4c-b4df-4470-bff4-4ea5fee045ad-plugins-conf\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.086813 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-erlang-cookie\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.086928 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.086985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05936f4c-b4df-4470-bff4-4ea5fee045ad-pod-info\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.087011 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-confd\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.087068 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-plugins\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.087138 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05936f4c-b4df-4470-bff4-4ea5fee045ad-erlang-cookie-secret\") pod \"05936f4c-b4df-4470-bff4-4ea5fee045ad\" (UID: \"05936f4c-b4df-4470-bff4-4ea5fee045ad\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.087318 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.087427 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05936f4c-b4df-4470-bff4-4ea5fee045ad-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.087724 4772 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05936f4c-b4df-4470-bff4-4ea5fee045ad-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.087787 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.088116 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.089994 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-kube-api-access-86zfw" (OuterVolumeSpecName: "kube-api-access-86zfw") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "kube-api-access-86zfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.090985 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/05936f4c-b4df-4470-bff4-4ea5fee045ad-pod-info" (OuterVolumeSpecName: "pod-info") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.091039 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05936f4c-b4df-4470-bff4-4ea5fee045ad-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.097316 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7" (OuterVolumeSpecName: "persistence") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.144479 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "05936f4c-b4df-4470-bff4-4ea5fee045ad" (UID: "05936f4c-b4df-4470-bff4-4ea5fee045ad"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.188529 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w67pt\" (UniqueName: \"kubernetes.io/projected/771cb13d-6b19-45a2-b23d-68156056b344-kube-api-access-w67pt\") pod \"771cb13d-6b19-45a2-b23d-68156056b344\" (UID: \"771cb13d-6b19-45a2-b23d-68156056b344\") "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.188935 4772 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05936f4c-b4df-4470-bff4-4ea5fee045ad-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.188948 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86zfw\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-kube-api-access-86zfw\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.188983 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\") on node \"crc\" "
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.188993 4772 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05936f4c-b4df-4470-bff4-4ea5fee045ad-pod-info\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.189002 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.189013 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05936f4c-b4df-4470-bff4-4ea5fee045ad-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.192617 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771cb13d-6b19-45a2-b23d-68156056b344-kube-api-access-w67pt" (OuterVolumeSpecName: "kube-api-access-w67pt") pod "771cb13d-6b19-45a2-b23d-68156056b344" (UID: "771cb13d-6b19-45a2-b23d-68156056b344"). InnerVolumeSpecName "kube-api-access-w67pt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.201502 4772 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.201640 4772 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7") on node "crc"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.290909 4772 reconciler_common.go:293] "Volume detached for volume \"pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d22e9ec3-0edb-4fc8-ac82-cf69a6d234f7\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.290955 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w67pt\" (UniqueName: \"kubernetes.io/projected/771cb13d-6b19-45a2-b23d-68156056b344-kube-api-access-w67pt\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.424469 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt" event={"ID":"fdc11ac5-1127-4ca6-b518-9476edcdaafb","Type":"ContainerDied","Data":"4b90d71f7d242317822fd76e11424e21035876150b30ffa6091590293dedff98"}
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.424530 4772 scope.go:117] "RemoveContainer" containerID="733f78ae577f592e537720480ceee881b358d0f4e498e222913a413efb9d743d"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.424862 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.426806 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x" event={"ID":"15e139ca-fa05-4701-b9d4-e4524f011e5d","Type":"ContainerDied","Data":"9fe0f075c62585faad7a4ae1a6303f2076ac9f7edc65da457881668cacf14d5d"}
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.426829 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-578d7f9b5f-s458x"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.435412 4772 generic.go:334] "Generic (PLEG): container finished" podID="05936f4c-b4df-4470-bff4-4ea5fee045ad" containerID="4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2" exitCode=0
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.435505 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"05936f4c-b4df-4470-bff4-4ea5fee045ad","Type":"ContainerDied","Data":"4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2"}
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.435539 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"05936f4c-b4df-4470-bff4-4ea5fee045ad","Type":"ContainerDied","Data":"b9b0dabadf4474efc42404c18fa491c73d9d225f899c98f4c3f4098744c0b902"}
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.435638 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.446021 4772 generic.go:334] "Generic (PLEG): container finished" podID="771cb13d-6b19-45a2-b23d-68156056b344" containerID="27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc" exitCode=0
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.446065 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-j6cwp" event={"ID":"771cb13d-6b19-45a2-b23d-68156056b344","Type":"ContainerDied","Data":"27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc"}
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.446094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-j6cwp" event={"ID":"771cb13d-6b19-45a2-b23d-68156056b344","Type":"ContainerDied","Data":"c38e7867497b2b6141995a3ff7c555e4c9ea8f36001806b4a59fdaef8e327f1c"}
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.446155 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-j6cwp"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.550448 4772 scope.go:117] "RemoveContainer" containerID="5f1adc023cdb78f59cb5b997da7cd6e791c9ed2b6ee1bbc8e9c823d76db51282"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.560574 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-j6cwp"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.570369 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/horizon-operator-index-j6cwp"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.577254 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-578d7f9b5f-s458x"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.590323 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-578d7f9b5f-s458x"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.599447 4772 scope.go:117] "RemoveContainer" containerID="4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.606273 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.613839 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.629320 4772 scope.go:117] "RemoveContainer" containerID="e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.629509 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.631965 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-847649c645-ww8rt"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.659317 4772 scope.go:117] "RemoveContainer" containerID="4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2"
Jan 24 04:01:22 crc kubenswrapper[4772]: E0124 04:01:22.662302 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2\": container with ID starting with 4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2 not found: ID does not exist" containerID="4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.662353 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2"} err="failed to get container status \"4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2\": rpc error: code = NotFound desc = could not find container \"4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2\": container with ID starting with 4ba3b349f66bf9b551bde58a2f408c7ec2fee9669e09cd406f893c7b20ffc4f2 not found: ID does not exist"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.662387 4772 scope.go:117] "RemoveContainer" containerID="e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb"
Jan 24 04:01:22 crc kubenswrapper[4772]: E0124 04:01:22.662860 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb\": container with ID starting with e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb not found: ID does not exist" containerID="e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.662998 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb"} err="failed to get container status \"e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb\": rpc error: code = NotFound desc = could not find container \"e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb\": container with ID starting with e2c9c0794dcbc3506923a9f697f0978386230692d3ff31dad2857a1e2cc995cb not found: ID does not exist"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.663110 4772 scope.go:117] "RemoveContainer" containerID="27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.679550 4772 scope.go:117] "RemoveContainer" containerID="27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc"
Jan 24 04:01:22 crc kubenswrapper[4772]: E0124 04:01:22.680151 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc\": container with ID starting with 27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc not found: ID does not exist" containerID="27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.680181 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc"} err="failed to get container status \"27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc\": rpc error: code = NotFound desc = could not find container \"27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc\": container with ID starting with 27b133280fa1ff197e76caee5b0a2e042d6989afec8df165d2147f2e86ea82fc not found: ID does not exist"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.882672 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-k6rrk"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.886966 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1"
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.888110 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-k6rrk"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.901290 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.906383 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-253c-account-create-update-ljrsq"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.914556 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone253c-account-delete-h5v7x"]
Jan 24 04:01:22 crc kubenswrapper[4772]: I0124 04:01:22.931024 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/openstack-galera-0" podUID="371de15b-9f9b-445c-afa1-eea50501d846" containerName="galera" containerID="cri-o://659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" gracePeriod=26
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.013429 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-operator-scripts\") pod \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.013484 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h446z\" (UniqueName: \"kubernetes.io/projected/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kube-api-access-h446z\") pod \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.013526 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-default\") pod \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.013582 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-generated\") pod \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.013599 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.013621 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kolla-config\") pod \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\" (UID: \"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.013943 4772 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.013990 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts podName:4f71f5ee-2df4-4622-82cd-1c3ac17d88b4 nodeName:}" failed. No retries permitted until 2026-01-24 04:01:27.013976911 +0000 UTC m=+1184.051067626 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts") pod "keystone253c-account-delete-h5v7x" (UID: "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4") : configmap "openstack-scripts" not found
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.014771 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" (UID: "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.014850 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" (UID: "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.015435 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" (UID: "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.015911 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" (UID: "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.030288 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kube-api-access-h446z" (OuterVolumeSpecName: "kube-api-access-h446z") pod "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" (UID: "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a"). InnerVolumeSpecName "kube-api-access-h446z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.047398 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" (UID: "3d2f9a52-48db-4361-b4d5-7f9ea2d6678a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.062972 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.063165 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6" podUID="1782b672-2a10-4ea2-a88a-c586d992d130" containerName="manager" containerID="cri-o://a5dcdaa4c3f700c48d725ed851adb9ff87b3b0a4d26ac5c47143f8c3d1d6eeb4" gracePeriod=10
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.115698 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.116113 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h446z\" (UniqueName: \"kubernetes.io/projected/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kube-api-access-h446z\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.116124 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.116145 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.116156 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.116165 4772 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.149218 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.217283 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.263699 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.319133 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts\") pod \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.319226 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js25q\" (UniqueName: \"kubernetes.io/projected/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-kube-api-access-js25q\") pod \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\" (UID: \"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.320161 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" (UID: "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.325317 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-kube-api-access-js25q" (OuterVolumeSpecName: "kube-api-access-js25q") pod "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" (UID: "4f71f5ee-2df4-4622-82cd-1c3ac17d88b4"). InnerVolumeSpecName "kube-api-access-js25q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.369850 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-vpgcd"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.370119 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-vpgcd" podUID="73bc974d-1516-41a1-afc8-588127374117" containerName="registry-server" containerID="cri-o://26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804" gracePeriod=30
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.406665 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.411367 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069cwr4n"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.422589 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.422618 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js25q\" (UniqueName: \"kubernetes.io/projected/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4-kube-api-access-js25q\") on node \"crc\" DevicePath \"\""
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.428314 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852 is running failed: container process not found" containerID="659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.428678 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852 is running failed: container process not found" containerID="659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.429109 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852 is running failed: container process not found" containerID="659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.429188 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852 is running failed: container process not found" probeType="Readiness" pod="horizon-kuttl-tests/openstack-galera-0" podUID="371de15b-9f9b-445c-afa1-eea50501d846" containerName="galera"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.463813 4772 generic.go:334] "Generic (PLEG): container finished" podID="1782b672-2a10-4ea2-a88a-c586d992d130" containerID="a5dcdaa4c3f700c48d725ed851adb9ff87b3b0a4d26ac5c47143f8c3d1d6eeb4" exitCode=0
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.463859 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6" event={"ID":"1782b672-2a10-4ea2-a88a-c586d992d130","Type":"ContainerDied","Data":"a5dcdaa4c3f700c48d725ed851adb9ff87b3b0a4d26ac5c47143f8c3d1d6eeb4"}
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.466186 4772 generic.go:334] "Generic (PLEG): container finished" podID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerID="e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b" exitCode=0
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.466299 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.466972 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a","Type":"ContainerDied","Data":"e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b"}
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.467001 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"3d2f9a52-48db-4361-b4d5-7f9ea2d6678a","Type":"ContainerDied","Data":"5eaef4de0bb701457801f4c5b647f5a49a645c3b8b5e91e5032783615cd01980"}
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.467017 4772 scope.go:117] "RemoveContainer" containerID="e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.485237 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x" event={"ID":"4f71f5ee-2df4-4622-82cd-1c3ac17d88b4","Type":"ContainerDied","Data":"96ff030422dcd15b6b9828b8a23c881cfff6acafecaeb2dfb96093dbcc6d3c2b"}
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.485323 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone253c-account-delete-h5v7x"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.569320 4772 scope.go:117] "RemoveContainer" containerID="8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.647830 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.650697 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.660016 4772 scope.go:117] "RemoveContainer" containerID="e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b"
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.660478 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b\": container with ID starting with e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b not found: ID does not exist" containerID="e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.660530 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b"} err="failed to get container status \"e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b\": rpc error: code = NotFound desc = could not find container \"e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b\": container with ID starting with e1f6a463b019b86eb61b30bb134ff57e8d06bb861e680c43a30d218e30e7f01b not found: ID does not exist"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.660554 4772 scope.go:117] "RemoveContainer" containerID="8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9"
Jan 24 04:01:23 crc kubenswrapper[4772]: E0124 04:01:23.660911 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9\": container with ID starting with 8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9 not found: ID does not exist" containerID="8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.660947 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9"} err="failed to get container status \"8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9\": rpc error: code = NotFound desc = could not find container \"8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9\": container with ID starting with 8bcd14894105e3e6f078471a54176ba160631186cc68663cfe5745655934c7d9 not found: ID does not exist"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.660971 4772 scope.go:117] "RemoveContainer" containerID="9656b6bcb329b3b5e78795747c229ab7fbed50b5340ece5752b913589aa71b06"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.682451 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05936f4c-b4df-4470-bff4-4ea5fee045ad" path="/var/lib/kubelet/pods/05936f4c-b4df-4470-bff4-4ea5fee045ad/volumes"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.683046 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e139ca-fa05-4701-b9d4-e4524f011e5d" path="/var/lib/kubelet/pods/15e139ca-fa05-4701-b9d4-e4524f011e5d/volumes"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.684000 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39506910-41ee-49b0-99f6-f11c28930385" path="/var/lib/kubelet/pods/39506910-41ee-49b0-99f6-f11c28930385/volumes"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.684592 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771cb13d-6b19-45a2-b23d-68156056b344" path="/var/lib/kubelet/pods/771cb13d-6b19-45a2-b23d-68156056b344/volumes"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.685498 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b913a37a-112f-484d-b135-9bed042886f9" path="/var/lib/kubelet/pods/b913a37a-112f-484d-b135-9bed042886f9/volumes"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.686408 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec53d439-5537-4600-a36e-175873ed7f38" path="/var/lib/kubelet/pods/ec53d439-5537-4600-a36e-175873ed7f38/volumes"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.686886 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc11ac5-1127-4ca6-b518-9476edcdaafb" path="/var/lib/kubelet/pods/fdc11ac5-1127-4ca6-b518-9476edcdaafb/volumes"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.689848 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.689888 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone253c-account-delete-h5v7x"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.689901 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone253c-account-delete-h5v7x"]
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.703683 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0"
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727551 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-operator-scripts\") pod \"371de15b-9f9b-445c-afa1-eea50501d846\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727606 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"371de15b-9f9b-445c-afa1-eea50501d846\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727673 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-config-data-default\") pod \"371de15b-9f9b-445c-afa1-eea50501d846\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727705 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-kolla-config\") pod \"371de15b-9f9b-445c-afa1-eea50501d846\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727762 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-webhook-cert\") pod \"1782b672-2a10-4ea2-a88a-c586d992d130\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727796 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/371de15b-9f9b-445c-afa1-eea50501d846-config-data-generated\") pod \"371de15b-9f9b-445c-afa1-eea50501d846\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727823 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-apiservice-cert\") pod \"1782b672-2a10-4ea2-a88a-c586d992d130\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727864 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl6n5\" (UniqueName: \"kubernetes.io/projected/1782b672-2a10-4ea2-a88a-c586d992d130-kube-api-access-pl6n5\") pod \"1782b672-2a10-4ea2-a88a-c586d992d130\" (UID: \"1782b672-2a10-4ea2-a88a-c586d992d130\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.727898 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tzwx\" (UniqueName: \"kubernetes.io/projected/371de15b-9f9b-445c-afa1-eea50501d846-kube-api-access-4tzwx\") pod \"371de15b-9f9b-445c-afa1-eea50501d846\" (UID: \"371de15b-9f9b-445c-afa1-eea50501d846\") "
Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.732100 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371de15b-9f9b-445c-afa1-eea50501d846-kube-api-access-4tzwx" (OuterVolumeSpecName: "kube-api-access-4tzwx") pod
"371de15b-9f9b-445c-afa1-eea50501d846" (UID: "371de15b-9f9b-445c-afa1-eea50501d846"). InnerVolumeSpecName "kube-api-access-4tzwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.732967 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "371de15b-9f9b-445c-afa1-eea50501d846" (UID: "371de15b-9f9b-445c-afa1-eea50501d846"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.734027 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "371de15b-9f9b-445c-afa1-eea50501d846" (UID: "371de15b-9f9b-445c-afa1-eea50501d846"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.737412 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "371de15b-9f9b-445c-afa1-eea50501d846" (UID: "371de15b-9f9b-445c-afa1-eea50501d846"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.738251 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/371de15b-9f9b-445c-afa1-eea50501d846-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "371de15b-9f9b-445c-afa1-eea50501d846" (UID: "371de15b-9f9b-445c-afa1-eea50501d846"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.738561 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1782b672-2a10-4ea2-a88a-c586d992d130" (UID: "1782b672-2a10-4ea2-a88a-c586d992d130"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.738583 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "1782b672-2a10-4ea2-a88a-c586d992d130" (UID: "1782b672-2a10-4ea2-a88a-c586d992d130"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.739043 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1782b672-2a10-4ea2-a88a-c586d992d130-kube-api-access-pl6n5" (OuterVolumeSpecName: "kube-api-access-pl6n5") pod "1782b672-2a10-4ea2-a88a-c586d992d130" (UID: "1782b672-2a10-4ea2-a88a-c586d992d130"). InnerVolumeSpecName "kube-api-access-pl6n5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.741321 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "371de15b-9f9b-445c-afa1-eea50501d846" (UID: "371de15b-9f9b-445c-afa1-eea50501d846"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.771488 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829128 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829160 4772 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829170 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829179 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/371de15b-9f9b-445c-afa1-eea50501d846-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829189 4772 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1782b672-2a10-4ea2-a88a-c586d992d130-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829199 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl6n5\" (UniqueName: \"kubernetes.io/projected/1782b672-2a10-4ea2-a88a-c586d992d130-kube-api-access-pl6n5\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829208 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tzwx\" (UniqueName: \"kubernetes.io/projected/371de15b-9f9b-445c-afa1-eea50501d846-kube-api-access-4tzwx\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829221 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371de15b-9f9b-445c-afa1-eea50501d846-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.829254 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.843104 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.930622 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp8cg\" (UniqueName: 
\"kubernetes.io/projected/73bc974d-1516-41a1-afc8-588127374117-kube-api-access-qp8cg\") pod \"73bc974d-1516-41a1-afc8-588127374117\" (UID: \"73bc974d-1516-41a1-afc8-588127374117\") " Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.930913 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:23 crc kubenswrapper[4772]: I0124 04:01:23.933557 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73bc974d-1516-41a1-afc8-588127374117-kube-api-access-qp8cg" (OuterVolumeSpecName: "kube-api-access-qp8cg") pod "73bc974d-1516-41a1-afc8-588127374117" (UID: "73bc974d-1516-41a1-afc8-588127374117"). InnerVolumeSpecName "kube-api-access-qp8cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.031801 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp8cg\" (UniqueName: \"kubernetes.io/projected/73bc974d-1516-41a1-afc8-588127374117-kube-api-access-qp8cg\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.511775 4772 generic.go:334] "Generic (PLEG): container finished" podID="73bc974d-1516-41a1-afc8-588127374117" containerID="26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804" exitCode=0 Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.511824 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vpgcd" event={"ID":"73bc974d-1516-41a1-afc8-588127374117","Type":"ContainerDied","Data":"26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804"} Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.512141 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vpgcd" event={"ID":"73bc974d-1516-41a1-afc8-588127374117","Type":"ContainerDied","Data":"04349d909a0c75429eb42fc2dcdb9edb0c4de247b0f50b8ecca4635ae61bc7fb"} Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.512163 4772 scope.go:117] "RemoveContainer" containerID="26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.511855 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-vpgcd" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.514495 4772 generic.go:334] "Generic (PLEG): container finished" podID="371de15b-9f9b-445c-afa1-eea50501d846" containerID="659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" exitCode=0 Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.514565 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"371de15b-9f9b-445c-afa1-eea50501d846","Type":"ContainerDied","Data":"659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852"} Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.514597 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"371de15b-9f9b-445c-afa1-eea50501d846","Type":"ContainerDied","Data":"d0f1a9d36757627fc9dd235a0490ee993ae77e9bbc2d70312f6b6b713f0c4f22"} Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.514654 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.520393 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6" event={"ID":"1782b672-2a10-4ea2-a88a-c586d992d130","Type":"ContainerDied","Data":"1bd5283e108b01b45883d61f222e2580c0bce66eb4db23506ab4fe2e9d9d93b9"} Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.520407 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.534347 4772 scope.go:117] "RemoveContainer" containerID="26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804" Jan 24 04:01:24 crc kubenswrapper[4772]: E0124 04:01:24.534802 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804\": container with ID starting with 26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804 not found: ID does not exist" containerID="26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.535677 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804"} err="failed to get container status \"26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804\": rpc error: code = NotFound desc = could not find container \"26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804\": container with ID starting with 26859c0f942d88b8157e75111d269fba8a60c7121d6cb72dacca58446abda804 not found: ID does not exist" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.535791 4772 scope.go:117] "RemoveContainer" containerID="659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.555232 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-vpgcd"] Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.563868 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-vpgcd"] Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.570121 4772 scope.go:117] "RemoveContainer" containerID="7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.571312 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.579395 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.584721 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"] Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.588804 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-76c887549b-6rhr6"] Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.606854 4772 scope.go:117] "RemoveContainer" containerID="659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" Jan 24 04:01:24 crc kubenswrapper[4772]: E0124 04:01:24.607359 
4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852\": container with ID starting with 659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852 not found: ID does not exist" containerID="659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.607412 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852"} err="failed to get container status \"659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852\": rpc error: code = NotFound desc = could not find container \"659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852\": container with ID starting with 659fe812d25a27f03f7b5e5fc55a30d94db0ea491be4c6fafca999fbaeeb1852 not found: ID does not exist" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.607447 4772 scope.go:117] "RemoveContainer" containerID="7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529" Jan 24 04:01:24 crc kubenswrapper[4772]: E0124 04:01:24.607799 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529\": container with ID starting with 7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529 not found: ID does not exist" containerID="7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.607866 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529"} err="failed to get container status \"7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529\": rpc error: code = NotFound desc = could not find container \"7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529\": container with ID starting with 7debd4092df60aab2ecf08c3b191190d7b39e10f5a33265a9b608e087ccf1529 not found: ID does not exist" Jan 24 04:01:24 crc kubenswrapper[4772]: I0124 04:01:24.607892 4772 scope.go:117] "RemoveContainer" containerID="a5dcdaa4c3f700c48d725ed851adb9ff87b3b0a4d26ac5c47143f8c3d1d6eeb4" Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.486061 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx"] Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.486302 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" podUID="fb0c9e11-a0ff-4748-a8ce-aeb96074bae3" containerName="operator" containerID="cri-o://e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a" gracePeriod=10 Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.702155 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1782b672-2a10-4ea2-a88a-c586d992d130" path="/var/lib/kubelet/pods/1782b672-2a10-4ea2-a88a-c586d992d130/volumes" Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.703127 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371de15b-9f9b-445c-afa1-eea50501d846" path="/var/lib/kubelet/pods/371de15b-9f9b-445c-afa1-eea50501d846/volumes" Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.703712 4772 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" path="/var/lib/kubelet/pods/3d2f9a52-48db-4361-b4d5-7f9ea2d6678a/volumes" Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.704782 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" path="/var/lib/kubelet/pods/4f71f5ee-2df4-4622-82cd-1c3ac17d88b4/volumes" Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.705196 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73bc974d-1516-41a1-afc8-588127374117" path="/var/lib/kubelet/pods/73bc974d-1516-41a1-afc8-588127374117/volumes" Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.833147 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-tbqs5"] Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.833975 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" podUID="8bf7865a-3937-4a88-b4bc-b5f9a36f26f9" containerName="registry-server" containerID="cri-o://630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803" gracePeriod=30 Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.862185 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8"] Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.869105 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590dkfb8"] Jan 24 04:01:25 crc kubenswrapper[4772]: I0124 04:01:25.972778 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.168554 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d7tp\" (UniqueName: \"kubernetes.io/projected/fb0c9e11-a0ff-4748-a8ce-aeb96074bae3-kube-api-access-5d7tp\") pod \"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3\" (UID: \"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3\") " Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.174457 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0c9e11-a0ff-4748-a8ce-aeb96074bae3-kube-api-access-5d7tp" (OuterVolumeSpecName: "kube-api-access-5d7tp") pod "fb0c9e11-a0ff-4748-a8ce-aeb96074bae3" (UID: "fb0c9e11-a0ff-4748-a8ce-aeb96074bae3"). InnerVolumeSpecName "kube-api-access-5d7tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.207269 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.270429 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d7tp\" (UniqueName: \"kubernetes.io/projected/fb0c9e11-a0ff-4748-a8ce-aeb96074bae3-kube-api-access-5d7tp\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.371600 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chfl8\" (UniqueName: \"kubernetes.io/projected/8bf7865a-3937-4a88-b4bc-b5f9a36f26f9-kube-api-access-chfl8\") pod \"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9\" (UID: \"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9\") " Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.374771 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf7865a-3937-4a88-b4bc-b5f9a36f26f9-kube-api-access-chfl8" (OuterVolumeSpecName: "kube-api-access-chfl8") pod "8bf7865a-3937-4a88-b4bc-b5f9a36f26f9" (UID: "8bf7865a-3937-4a88-b4bc-b5f9a36f26f9"). InnerVolumeSpecName "kube-api-access-chfl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.473057 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chfl8\" (UniqueName: \"kubernetes.io/projected/8bf7865a-3937-4a88-b4bc-b5f9a36f26f9-kube-api-access-chfl8\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.542227 4772 generic.go:334] "Generic (PLEG): container finished" podID="fb0c9e11-a0ff-4748-a8ce-aeb96074bae3" containerID="e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a" exitCode=0 Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.542271 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" event={"ID":"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3","Type":"ContainerDied","Data":"e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a"} Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.543520 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" event={"ID":"fb0c9e11-a0ff-4748-a8ce-aeb96074bae3","Type":"ContainerDied","Data":"4898a43cdafceafd1b4eed4330f50fed913152326c452bd156977dfcbe4d7789"} Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.542313 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.543606 4772 scope.go:117] "RemoveContainer" containerID="e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.545831 4772 generic.go:334] "Generic (PLEG): container finished" podID="8bf7865a-3937-4a88-b4bc-b5f9a36f26f9" containerID="630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803" exitCode=0 Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.545893 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.545962 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" event={"ID":"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9","Type":"ContainerDied","Data":"630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803"} Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.546013 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-tbqs5" event={"ID":"8bf7865a-3937-4a88-b4bc-b5f9a36f26f9","Type":"ContainerDied","Data":"40d4599e3c44d7719a18a256af159230b688eeb546bff36c91ae1d0f044b3657"} Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.573662 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx"] Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.574178 4772 scope.go:117] "RemoveContainer" containerID="e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a" Jan 24 04:01:26 crc kubenswrapper[4772]: E0124 04:01:26.574949 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a\": container with ID starting with e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a not found: ID does not exist" containerID="e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.575020 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a"} err="failed to get container status \"e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a\": rpc error: code = NotFound desc = could not find container \"e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a\": container with ID starting with e9fd13d87375b1beb3d632f551a1d59c6a89af76055d440b2f22f328bba5f93a not found: ID does not exist" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.575062 4772 scope.go:117] "RemoveContainer" containerID="630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.585174 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dflxx"] Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.596712 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-tbqs5"] Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.597812 4772 scope.go:117] "RemoveContainer" containerID="630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803" Jan 24 04:01:26 crc kubenswrapper[4772]: E0124 04:01:26.598815 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803\": container with ID starting with 630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803 not found: ID does not exist" containerID="630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.598857 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803"} err="failed to get container status \"630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803\": rpc error: code = NotFound desc = could not find container \"630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803\": container with ID starting with 630a2b3aa4bf5db0840316826a3feb13fd543dd096ddd745fb32b28943124803 not found: ID does not exist" Jan 24 04:01:26 crc kubenswrapper[4772]: I0124 04:01:26.602337 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-tbqs5"] Jan 24 04:01:27 crc kubenswrapper[4772]: I0124 04:01:27.671673 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7" path="/var/lib/kubelet/pods/5b9a4e42-f27d-4eb8-8ac1-2c4111b65dc7/volumes" Jan 24 04:01:27 crc kubenswrapper[4772]: I0124 04:01:27.672928 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf7865a-3937-4a88-b4bc-b5f9a36f26f9" path="/var/lib/kubelet/pods/8bf7865a-3937-4a88-b4bc-b5f9a36f26f9/volumes" Jan 24 04:01:27 crc kubenswrapper[4772]: I0124 04:01:27.673559 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0c9e11-a0ff-4748-a8ce-aeb96074bae3" path="/var/lib/kubelet/pods/fb0c9e11-a0ff-4748-a8ce-aeb96074bae3/volumes" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.080716 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx"] Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.081561 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" podUID="add30b7e-5452-48b0-9d24-9ac6dba05f43" containerName="manager" containerID="cri-o://ecb2dc2d8df2a2d975ccdb996da9e2b1056856c51e5313ade94d2f56df741b6c" gracePeriod=10 Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.476416 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-fgqf6"] Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.476742 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-fgqf6" podUID="5b8ea70a-6c94-4ff6-878b-9e932a7251a5" containerName="registry-server" containerID="cri-o://532cac6a7bc8c117fc977167e447595e155c4b6cb04b9252deb628f76e0ef987" gracePeriod=30 Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.505878 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv"] Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.507815 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/5fa8967dd1d3d931d6ad5c29434db43714024d2fd37d01754631119ea8qdglv"] Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.622847 4772 generic.go:334] "Generic (PLEG): container finished" podID="5b8ea70a-6c94-4ff6-878b-9e932a7251a5" containerID="532cac6a7bc8c117fc977167e447595e155c4b6cb04b9252deb628f76e0ef987" exitCode=0 Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.623234 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fgqf6" event={"ID":"5b8ea70a-6c94-4ff6-878b-9e932a7251a5","Type":"ContainerDied","Data":"532cac6a7bc8c117fc977167e447595e155c4b6cb04b9252deb628f76e0ef987"} Jan 24 04:01:33 crc 
kubenswrapper[4772]: I0124 04:01:33.626205 4772 generic.go:334] "Generic (PLEG): container finished" podID="add30b7e-5452-48b0-9d24-9ac6dba05f43" containerID="ecb2dc2d8df2a2d975ccdb996da9e2b1056856c51e5313ade94d2f56df741b6c" exitCode=0 Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.626239 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" event={"ID":"add30b7e-5452-48b0-9d24-9ac6dba05f43","Type":"ContainerDied","Data":"ecb2dc2d8df2a2d975ccdb996da9e2b1056856c51e5313ade94d2f56df741b6c"} Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.626257 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" event={"ID":"add30b7e-5452-48b0-9d24-9ac6dba05f43","Type":"ContainerDied","Data":"ccd9793c7ad7a868a70b7bd8e1a014e4b5b7c32a24255362ae0671881fab017e"} Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.626269 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccd9793c7ad7a868a70b7bd8e1a014e4b5b7c32a24255362ae0671881fab017e" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.652078 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.666867 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950be24a-370e-4a36-9f60-3342e339e1c6" path="/var/lib/kubelet/pods/950be24a-370e-4a36-9f60-3342e339e1c6/volumes" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.781031 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-795ng\" (UniqueName: \"kubernetes.io/projected/add30b7e-5452-48b0-9d24-9ac6dba05f43-kube-api-access-795ng\") pod \"add30b7e-5452-48b0-9d24-9ac6dba05f43\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.781092 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-apiservice-cert\") pod \"add30b7e-5452-48b0-9d24-9ac6dba05f43\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.781159 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-webhook-cert\") pod \"add30b7e-5452-48b0-9d24-9ac6dba05f43\" (UID: \"add30b7e-5452-48b0-9d24-9ac6dba05f43\") " Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.787967 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add30b7e-5452-48b0-9d24-9ac6dba05f43-kube-api-access-795ng" (OuterVolumeSpecName: "kube-api-access-795ng") pod "add30b7e-5452-48b0-9d24-9ac6dba05f43" (UID: "add30b7e-5452-48b0-9d24-9ac6dba05f43"). InnerVolumeSpecName "kube-api-access-795ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.789813 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "add30b7e-5452-48b0-9d24-9ac6dba05f43" (UID: "add30b7e-5452-48b0-9d24-9ac6dba05f43"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.800816 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "add30b7e-5452-48b0-9d24-9ac6dba05f43" (UID: "add30b7e-5452-48b0-9d24-9ac6dba05f43"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.828287 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.885179 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5nfs\" (UniqueName: \"kubernetes.io/projected/5b8ea70a-6c94-4ff6-878b-9e932a7251a5-kube-api-access-n5nfs\") pod \"5b8ea70a-6c94-4ff6-878b-9e932a7251a5\" (UID: \"5b8ea70a-6c94-4ff6-878b-9e932a7251a5\") " Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.885448 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-795ng\" (UniqueName: \"kubernetes.io/projected/add30b7e-5452-48b0-9d24-9ac6dba05f43-kube-api-access-795ng\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.885462 4772 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.885473 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/add30b7e-5452-48b0-9d24-9ac6dba05f43-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.887751 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8ea70a-6c94-4ff6-878b-9e932a7251a5-kube-api-access-n5nfs" (OuterVolumeSpecName: "kube-api-access-n5nfs") pod "5b8ea70a-6c94-4ff6-878b-9e932a7251a5" (UID: "5b8ea70a-6c94-4ff6-878b-9e932a7251a5"). InnerVolumeSpecName "kube-api-access-n5nfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:33 crc kubenswrapper[4772]: I0124 04:01:33.986714 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5nfs\" (UniqueName: \"kubernetes.io/projected/5b8ea70a-6c94-4ff6-878b-9e932a7251a5-kube-api-access-n5nfs\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.633883 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx" Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.633941 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-fgqf6" Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.633878 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-fgqf6" event={"ID":"5b8ea70a-6c94-4ff6-878b-9e932a7251a5","Type":"ContainerDied","Data":"f4f5aed054d1635251d4918745b91055973bdd876902e95636d5b5ed1c118442"} Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.634106 4772 scope.go:117] "RemoveContainer" containerID="532cac6a7bc8c117fc977167e447595e155c4b6cb04b9252deb628f76e0ef987" Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.665515 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx"] Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.671020 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-568c7bc546-wh4qx"] Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.679494 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-fgqf6"] Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.683448 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-fgqf6"] Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.943071 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v"] Jan 24 04:01:34 crc kubenswrapper[4772]: I0124 04:01:34.943301 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" podUID="4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" containerName="manager" containerID="cri-o://091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c" gracePeriod=10 Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.291690 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-svwvf"] Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.295132 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-svwvf" podUID="1f0eff61-87e2-4fe3-82af-e42e31fbe2e5" containerName="registry-server" containerID="cri-o://03e491b4775f254e16a95633cf991c8c272d23ccb1ced24816b76806cbf27fc5" gracePeriod=30 Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.315910 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5"] Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.320718 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b7cxm5"] Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.387278 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.405363 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-webhook-cert\") pod \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.405404 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-apiservice-cert\") pod \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.405521 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct25p\" (UniqueName: \"kubernetes.io/projected/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-kube-api-access-ct25p\") pod \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\" (UID: \"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0\") " Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.409333 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-kube-api-access-ct25p" (OuterVolumeSpecName: "kube-api-access-ct25p") pod "4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" (UID: "4fb52c71-52b6-47ab-9035-8fa8ac77a8b0"). InnerVolumeSpecName "kube-api-access-ct25p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.409655 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" (UID: "4fb52c71-52b6-47ab-9035-8fa8ac77a8b0"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.410258 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" (UID: "4fb52c71-52b6-47ab-9035-8fa8ac77a8b0"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.507284 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.507346 4772 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.507365 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct25p\" (UniqueName: \"kubernetes.io/projected/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0-kube-api-access-ct25p\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.644797 4772 generic.go:334] "Generic (PLEG): container finished" podID="4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" containerID="091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c" exitCode=0 Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.644850 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" event={"ID":"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0","Type":"ContainerDied","Data":"091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c"} Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.644874 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" event={"ID":"4fb52c71-52b6-47ab-9035-8fa8ac77a8b0","Type":"ContainerDied","Data":"ff0a4f2debe80ecb6faaa89c818681af8eee704220113d9dd015a80feb898c0d"} Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.644891 4772 scope.go:117] "RemoveContainer" containerID="091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.644968 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.646853 4772 generic.go:334] "Generic (PLEG): container finished" podID="1f0eff61-87e2-4fe3-82af-e42e31fbe2e5" containerID="03e491b4775f254e16a95633cf991c8c272d23ccb1ced24816b76806cbf27fc5" exitCode=0 Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.646906 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-svwvf" event={"ID":"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5","Type":"ContainerDied","Data":"03e491b4775f254e16a95633cf991c8c272d23ccb1ced24816b76806cbf27fc5"} Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.672813 4772 scope.go:117] "RemoveContainer" containerID="091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c" Jan 24 04:01:35 crc kubenswrapper[4772]: E0124 04:01:35.673415 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c\": container with ID starting with 091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c not found: ID does not exist" containerID="091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.673445 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c"} err="failed to get container status \"091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c\": rpc error: code = NotFound desc = could not find container \"091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c\": container with ID starting with 091027dbe3414f98e4f3009d765b2716293014355dfccec02789ebca1154c47c not found: ID does not exist" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.675131 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ebaf797-e602-47ea-a5e4-24fd2f2ecf01" path="/var/lib/kubelet/pods/3ebaf797-e602-47ea-a5e4-24fd2f2ecf01/volumes" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.676597 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8ea70a-6c94-4ff6-878b-9e932a7251a5" path="/var/lib/kubelet/pods/5b8ea70a-6c94-4ff6-878b-9e932a7251a5/volumes" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.677657 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add30b7e-5452-48b0-9d24-9ac6dba05f43" path="/var/lib/kubelet/pods/add30b7e-5452-48b0-9d24-9ac6dba05f43/volumes" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.678464 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v"] Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.678520 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-55c49c975d-nch2v"] Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.756739 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.810435 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xktbv\" (UniqueName: \"kubernetes.io/projected/1f0eff61-87e2-4fe3-82af-e42e31fbe2e5-kube-api-access-xktbv\") pod \"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5\" (UID: \"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5\") " Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.814234 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0eff61-87e2-4fe3-82af-e42e31fbe2e5-kube-api-access-xktbv" (OuterVolumeSpecName: "kube-api-access-xktbv") pod "1f0eff61-87e2-4fe3-82af-e42e31fbe2e5" (UID: "1f0eff61-87e2-4fe3-82af-e42e31fbe2e5"). InnerVolumeSpecName "kube-api-access-xktbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:01:35 crc kubenswrapper[4772]: I0124 04:01:35.911272 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xktbv\" (UniqueName: \"kubernetes.io/projected/1f0eff61-87e2-4fe3-82af-e42e31fbe2e5-kube-api-access-xktbv\") on node \"crc\" DevicePath \"\"" Jan 24 04:01:36 crc kubenswrapper[4772]: I0124 04:01:36.659393 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-svwvf" event={"ID":"1f0eff61-87e2-4fe3-82af-e42e31fbe2e5","Type":"ContainerDied","Data":"9d16348e6f86c1151d231ecaff3ee5d2e1e5fc0a448445e82f8e4bc3ef2ea5ec"} Jan 24 04:01:36 crc kubenswrapper[4772]: I0124 04:01:36.659461 4772 scope.go:117] "RemoveContainer" containerID="03e491b4775f254e16a95633cf991c8c272d23ccb1ced24816b76806cbf27fc5" Jan 24 04:01:36 crc kubenswrapper[4772]: I0124 04:01:36.659481 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-svwvf" Jan 24 04:01:36 crc kubenswrapper[4772]: I0124 04:01:36.701580 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-svwvf"] Jan 24 04:01:36 crc kubenswrapper[4772]: I0124 04:01:36.712497 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-svwvf"] Jan 24 04:01:37 crc kubenswrapper[4772]: I0124 04:01:37.677949 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0eff61-87e2-4fe3-82af-e42e31fbe2e5" path="/var/lib/kubelet/pods/1f0eff61-87e2-4fe3-82af-e42e31fbe2e5/volumes" Jan 24 04:01:37 crc kubenswrapper[4772]: I0124 04:01:37.678533 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" path="/var/lib/kubelet/pods/4fb52c71-52b6-47ab-9035-8fa8ac77a8b0/volumes" Jan 24 04:01:46 crc kubenswrapper[4772]: I0124 04:01:46.900055 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 04:01:46 crc kubenswrapper[4772]: I0124 04:01:46.901280 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 04:01:48 crc kubenswrapper[4772]: I0124 04:01:48.012480 4772 scope.go:117] "RemoveContainer" containerID="c483479fe58315314f7b596ae6eeb6ebfabed30bd05f74ab4cf93a16f45b4b77" Jan 24 04:01:48 crc kubenswrapper[4772]: I0124 04:01:48.044984 4772 scope.go:117] "RemoveContainer" containerID="cdc8e806f759ca65948eab0dcd9ee9a0de936b5e8a5b048cbd6b44502a5f9556" Jan 24 04:01:48 crc kubenswrapper[4772]: I0124 04:01:48.085304 4772 scope.go:117] "RemoveContainer" containerID="7e1e658d856b2373560d6c31444ed5489296e4a20aae4a797c8e832319e7b834" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682143 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wkmfc/must-gather-s4sds"] Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682770 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerName="mysql-bootstrap" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682788 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerName="mysql-bootstrap" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682806 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf7865a-3937-4a88-b4bc-b5f9a36f26f9" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682816 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf7865a-3937-4a88-b4bc-b5f9a36f26f9" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682831 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73bc974d-1516-41a1-afc8-588127374117" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682841 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="73bc974d-1516-41a1-afc8-588127374117" 
containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682852 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerName="mariadb-account-delete" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682862 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerName="mariadb-account-delete" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682878 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerName="mysql-bootstrap" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682886 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerName="mysql-bootstrap" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682899 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371de15b-9f9b-445c-afa1-eea50501d846" containerName="mysql-bootstrap" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682907 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="371de15b-9f9b-445c-afa1-eea50501d846" containerName="mysql-bootstrap" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682916 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682924 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682936 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8ea70a-6c94-4ff6-878b-9e932a7251a5" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682944 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8ea70a-6c94-4ff6-878b-9e932a7251a5" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682953 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc11ac5-1127-4ca6-b518-9476edcdaafb" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682961 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc11ac5-1127-4ca6-b518-9476edcdaafb" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682973 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771cb13d-6b19-45a2-b23d-68156056b344" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.682981 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="771cb13d-6b19-45a2-b23d-68156056b344" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.682995 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add30b7e-5452-48b0-9d24-9ac6dba05f43" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683005 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="add30b7e-5452-48b0-9d24-9ac6dba05f43" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683014 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05936f4c-b4df-4470-bff4-4ea5fee045ad" containerName="setup-container" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683021 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="05936f4c-b4df-4470-bff4-4ea5fee045ad" 
containerName="setup-container" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683032 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0c9e11-a0ff-4748-a8ce-aeb96074bae3" containerName="operator" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683040 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0c9e11-a0ff-4748-a8ce-aeb96074bae3" containerName="operator" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683053 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1782b672-2a10-4ea2-a88a-c586d992d130" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683060 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1782b672-2a10-4ea2-a88a-c586d992d130" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683072 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683079 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683090 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e139ca-fa05-4701-b9d4-e4524f011e5d" containerName="keystone-api" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683097 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e139ca-fa05-4701-b9d4-e4524f011e5d" containerName="keystone-api" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683106 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371de15b-9f9b-445c-afa1-eea50501d846" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683114 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="371de15b-9f9b-445c-afa1-eea50501d846" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683122 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683130 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683139 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05936f4c-b4df-4470-bff4-4ea5fee045ad" containerName="rabbitmq" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683147 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="05936f4c-b4df-4470-bff4-4ea5fee045ad" containerName="rabbitmq" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683156 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0eff61-87e2-4fe3-82af-e42e31fbe2e5" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683163 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0eff61-87e2-4fe3-82af-e42e31fbe2e5" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: E0124 04:01:49.683175 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2831b28-9004-4323-a39f-0a43d7cbb6c0" containerName="memcached" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683183 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2831b28-9004-4323-a39f-0a43d7cbb6c0" containerName="memcached" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683299 
4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="771cb13d-6b19-45a2-b23d-68156056b344" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683313 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0eff61-87e2-4fe3-82af-e42e31fbe2e5" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683323 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerName="mariadb-account-delete" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683334 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2831b28-9004-4323-a39f-0a43d7cbb6c0" containerName="memcached" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683345 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerName="mariadb-account-delete" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683355 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb52c71-52b6-47ab-9035-8fa8ac77a8b0" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683365 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1782b672-2a10-4ea2-a88a-c586d992d130" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683374 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0c9e11-a0ff-4748-a8ce-aeb96074bae3" containerName="operator" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683385 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e139ca-fa05-4701-b9d4-e4524f011e5d" containerName="keystone-api" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683397 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2f9a52-48db-4361-b4d5-7f9ea2d6678a" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683406 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="05936f4c-b4df-4470-bff4-4ea5fee045ad" containerName="rabbitmq" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683418 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc11ac5-1127-4ca6-b518-9476edcdaafb" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683426 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8ea70a-6c94-4ff6-878b-9e932a7251a5" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683436 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="73bc974d-1516-41a1-afc8-588127374117" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683447 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ee0cb0-31f8-40c4-baee-a20c07e97162" containerName="galera" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683458 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="add30b7e-5452-48b0-9d24-9ac6dba05f43" containerName="manager" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683468 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf7865a-3937-4a88-b4bc-b5f9a36f26f9" containerName="registry-server" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683479 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="371de15b-9f9b-445c-afa1-eea50501d846" containerName="galera" Jan 24 04:01:49 crc 
kubenswrapper[4772]: E0124 04:01:49.683597 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerName="mariadb-account-delete" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.683606 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f71f5ee-2df4-4622-82cd-1c3ac17d88b4" containerName="mariadb-account-delete" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.684304 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.690141 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wkmfc"/"kube-root-ca.crt" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.690189 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wkmfc"/"openshift-service-ca.crt" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.707553 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkmfc/must-gather-s4sds"] Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.753000 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/717cef35-4d54-45f6-8fe2-20a0260eca6a-must-gather-output\") pod \"must-gather-s4sds\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.753095 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sl6d\" (UniqueName: \"kubernetes.io/projected/717cef35-4d54-45f6-8fe2-20a0260eca6a-kube-api-access-5sl6d\") pod \"must-gather-s4sds\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.854481 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/717cef35-4d54-45f6-8fe2-20a0260eca6a-must-gather-output\") pod \"must-gather-s4sds\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.854539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sl6d\" (UniqueName: \"kubernetes.io/projected/717cef35-4d54-45f6-8fe2-20a0260eca6a-kube-api-access-5sl6d\") pod \"must-gather-s4sds\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.854965 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/717cef35-4d54-45f6-8fe2-20a0260eca6a-must-gather-output\") pod \"must-gather-s4sds\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:49 crc kubenswrapper[4772]: I0124 04:01:49.876860 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sl6d\" (UniqueName: \"kubernetes.io/projected/717cef35-4d54-45f6-8fe2-20a0260eca6a-kube-api-access-5sl6d\") pod \"must-gather-s4sds\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:50 crc 
kubenswrapper[4772]: I0124 04:01:50.008998 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:01:50 crc kubenswrapper[4772]: I0124 04:01:50.233387 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkmfc/must-gather-s4sds"] Jan 24 04:01:50 crc kubenswrapper[4772]: I0124 04:01:50.808241 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkmfc/must-gather-s4sds" event={"ID":"717cef35-4d54-45f6-8fe2-20a0260eca6a","Type":"ContainerStarted","Data":"2cc7f9399041099679b3b8bf32ea6587525c188536431b323bfdc6abcab432ff"} Jan 24 04:01:56 crc kubenswrapper[4772]: I0124 04:01:56.850950 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkmfc/must-gather-s4sds" event={"ID":"717cef35-4d54-45f6-8fe2-20a0260eca6a","Type":"ContainerStarted","Data":"003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13"} Jan 24 04:01:57 crc kubenswrapper[4772]: I0124 04:01:57.860600 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkmfc/must-gather-s4sds" event={"ID":"717cef35-4d54-45f6-8fe2-20a0260eca6a","Type":"ContainerStarted","Data":"d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048"} Jan 24 04:01:57 crc kubenswrapper[4772]: I0124 04:01:57.879540 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wkmfc/must-gather-s4sds" podStartSLOduration=2.700802613 podStartE2EDuration="8.879517907s" podCreationTimestamp="2026-01-24 04:01:49 +0000 UTC" firstStartedPulling="2026-01-24 04:01:50.240098653 +0000 UTC m=+1207.277189388" lastFinishedPulling="2026-01-24 04:01:56.418813917 +0000 UTC m=+1213.455904682" observedRunningTime="2026-01-24 04:01:57.877363107 +0000 UTC m=+1214.914453842" watchObservedRunningTime="2026-01-24 04:01:57.879517907 +0000 UTC m=+1214.916608642" Jan 24 04:02:16 crc kubenswrapper[4772]: I0124 04:02:16.900505 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 04:02:16 crc kubenswrapper[4772]: I0124 04:02:16.901335 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 04:02:46 crc kubenswrapper[4772]: I0124 04:02:46.899646 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 04:02:46 crc kubenswrapper[4772]: I0124 04:02:46.900479 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 04:02:46 crc kubenswrapper[4772]: I0124 04:02:46.900550 4772 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 04:02:46 crc kubenswrapper[4772]: I0124 04:02:46.901416 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bcdebca1206d8fdd16116f29a35ce352cd252b575a61af374af1a09608438c8"} pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 04:02:46 crc kubenswrapper[4772]: I0124 04:02:46.901501 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://8bcdebca1206d8fdd16116f29a35ce352cd252b575a61af374af1a09608438c8" gracePeriod=600 Jan 24 04:02:47 crc kubenswrapper[4772]: I0124 04:02:47.207055 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="8bcdebca1206d8fdd16116f29a35ce352cd252b575a61af374af1a09608438c8" exitCode=0 Jan 24 04:02:47 crc kubenswrapper[4772]: I0124 04:02:47.207130 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"8bcdebca1206d8fdd16116f29a35ce352cd252b575a61af374af1a09608438c8"} Jan 24 04:02:47 crc kubenswrapper[4772]: I0124 04:02:47.207480 4772 scope.go:117] "RemoveContainer" containerID="2e6f02f79a8cefb2a7d2c04486f279ad1f52ca159fe2ff0308256f0e25cae45d" Jan 24 04:02:48 crc kubenswrapper[4772]: I0124 04:02:48.215248 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"367ef2e272ab723a6a06c17ef1c2326ab806530cacb8836f168e267336d36207"} Jan 24 04:02:48 crc kubenswrapper[4772]: I0124 04:02:48.411225 4772 scope.go:117] "RemoveContainer" containerID="bcf5b0511ebdc9c2a965bb7b3acbd316497221138cb98647681e56d6af033f10" Jan 24 04:02:48 crc kubenswrapper[4772]: I0124 04:02:48.431214 4772 scope.go:117] "RemoveContainer" containerID="ecb2dc2d8df2a2d975ccdb996da9e2b1056856c51e5313ade94d2f56df741b6c" Jan 24 04:02:48 crc kubenswrapper[4772]: I0124 04:02:48.448860 4772 scope.go:117] "RemoveContainer" containerID="11630e9af61f3b886d721f8a4f50af700ddb3cea53ef4e843beef0eefdd613dc" Jan 24 04:02:48 crc kubenswrapper[4772]: I0124 04:02:48.477838 4772 scope.go:117] "RemoveContainer" containerID="ad1b4cfb6c39f8ad5d43d3450f958e07049dc0e1015d61c53f8fe225011492e3" Jan 24 04:02:48 crc kubenswrapper[4772]: I0124 04:02:48.501011 4772 scope.go:117] "RemoveContainer" containerID="1a3f58a1f79febf026825b7ab2c7b9bce436517c22efb037662168685743cfed" Jan 24 04:02:48 crc kubenswrapper[4772]: I0124 04:02:48.854322 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f88sl_e9bc8517-d223-42cf-9c6d-5e96dbc58e27/control-plane-machine-set-operator/0.log" Jan 24 04:02:49 crc kubenswrapper[4772]: I0124 04:02:49.061750 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lpsdw_62148cbe-9135-4627-8b05-05f8f4465d20/machine-api-operator/0.log" Jan 24 04:02:49 crc kubenswrapper[4772]: I0124 04:02:49.079590 4772 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lpsdw_62148cbe-9135-4627-8b05-05f8f4465d20/kube-rbac-proxy/0.log" Jan 24 04:03:18 crc kubenswrapper[4772]: I0124 04:03:18.865612 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8skfz_af65200d-34f2-478a-9c7b-48c6eb982b11/kube-rbac-proxy/0.log" Jan 24 04:03:18 crc kubenswrapper[4772]: I0124 04:03:18.898424 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8skfz_af65200d-34f2-478a-9c7b-48c6eb982b11/controller/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.008764 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-vs95l_57fd94bd-5a17-4006-8495-fb020332ba40/frr-k8s-webhook-server/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.097560 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.223223 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.250647 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.283443 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.289845 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.482772 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.520728 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.528426 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.529778 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.690039 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.714839 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/controller/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.720560 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.734840 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.909634 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/frr-metrics/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.924251 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/kube-rbac-proxy/0.log" Jan 24 04:03:19 crc kubenswrapper[4772]: I0124 04:03:19.934787 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/kube-rbac-proxy-frr/0.log" Jan 24 04:03:20 crc kubenswrapper[4772]: I0124 04:03:20.082335 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/frr/0.log" Jan 24 04:03:20 crc kubenswrapper[4772]: I0124 04:03:20.110232 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/reloader/0.log" Jan 24 04:03:20 crc kubenswrapper[4772]: I0124 04:03:20.169448 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fd7b6df9d-zhx85_1e5b7120-c263-41a3-9360-d8c132d6235c/manager/0.log" Jan 24 04:03:20 crc kubenswrapper[4772]: I0124 04:03:20.269826 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6f9db7bdcf-9p4fp_0e9057ec-4b8c-4cea-a3d3-52001b746757/webhook-server/0.log" Jan 24 04:03:20 crc kubenswrapper[4772]: I0124 04:03:20.347494 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6krl4_d6566aa5-3d44-45c0-94f7-6c618ac3b626/kube-rbac-proxy/0.log" Jan 24 04:03:20 crc kubenswrapper[4772]: I0124 04:03:20.532473 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6krl4_d6566aa5-3d44-45c0-94f7-6c618ac3b626/speaker/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.397622 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/util/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.584203 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/util/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.584845 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/pull/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.612660 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/pull/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.746212 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/util/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.751179 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/pull/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.776663 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/extract/0.log" Jan 24 04:03:46 crc kubenswrapper[4772]: I0124 04:03:46.909707 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-utilities/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.104152 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-content/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.108164 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-utilities/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.125879 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-content/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.266269 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-utilities/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.275970 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-content/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.505761 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-utilities/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.674715 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-content/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.676268 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/registry-server/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.708455 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-content/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.708486 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-utilities/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.927583 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-utilities/0.log" Jan 24 04:03:47 crc kubenswrapper[4772]: I0124 04:03:47.957261 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-content/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.095185 4772 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-v4b5k_b86459ea-dd22-4a69-8561-379087f99c80/marketplace-operator/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.181288 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-utilities/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.303676 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/registry-server/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.413057 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-content/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.414135 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-content/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.428364 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-utilities/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.576086 4772 scope.go:117] "RemoveContainer" containerID="ddd731f930a2b93869cb24ef6f583c580cde489f5243c8c8949edd6d85cde6da" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.581166 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-content/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.583249 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-utilities/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.595065 4772 scope.go:117] "RemoveContainer" containerID="a068e3b8329a5e9203fac1e2ec28103747deb4ae4cc17c1dcbbc99d0c321caad" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.610886 4772 scope.go:117] "RemoveContainer" containerID="e59ccb7a812ec91fae49516f8b35078b655ca2bab8dd521d6c004931bc3b659a" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.689370 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/registry-server/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.748342 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-utilities/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.941011 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-content/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.961112 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-content/0.log" Jan 24 04:03:48 crc kubenswrapper[4772]: I0124 04:03:48.961140 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-utilities/0.log" Jan 24 04:03:49 crc 
kubenswrapper[4772]: I0124 04:03:49.109203 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-utilities/0.log" Jan 24 04:03:49 crc kubenswrapper[4772]: I0124 04:03:49.136267 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-content/0.log" Jan 24 04:03:49 crc kubenswrapper[4772]: I0124 04:03:49.360814 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/registry-server/0.log" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.659116 4772 scope.go:117] "RemoveContainer" containerID="807e5df6c31404ad5bba5bd01a9d401b7eac60a61b4b5b2a0043502a1f7a64ba" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.705892 4772 scope.go:117] "RemoveContainer" containerID="51262daab598d8d99d1ed0b86c92fee3a2564ac0bf43c8568e396e92e166596c" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.734542 4772 scope.go:117] "RemoveContainer" containerID="6dbe08d1fdf95f619b6d5eb3bbdf49ca812b94f9fbf4cd383235bdd69c47d8f0" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.763138 4772 scope.go:117] "RemoveContainer" containerID="c1db88fd6d07d58f2995e381d5bc0e65190652ab95d820eb6b828d4536a87c47" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.787914 4772 scope.go:117] "RemoveContainer" containerID="92966d56542571126fbb411be9dbef7ba1e9ecef51dd799126f265e9b9d7d747" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.810194 4772 scope.go:117] "RemoveContainer" containerID="2d2f210905fe34861e38aedb8735c3f3cdbae413d570911efd29331064b21ca7" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.827016 4772 scope.go:117] "RemoveContainer" containerID="1a377a7b1447aca3f61c5d5633f5115ce7307736fbbefe32e06c778e6a5ad521" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.844857 4772 scope.go:117] "RemoveContainer" containerID="8cf8b01fb0d8f029213c3444df095a25f2cf2f968c4d427c76d548c3be3d4564" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.865560 4772 scope.go:117] "RemoveContainer" containerID="30363292f5d3e9746a022fb1f9ce2ad8fd9a21054d682a442db9836887a862d9" Jan 24 04:04:48 crc kubenswrapper[4772]: I0124 04:04:48.912278 4772 scope.go:117] "RemoveContainer" containerID="97e7af0331bfc3d9b793e20539f3caae6e638a88abfd8d69b891348f996c31c7" Jan 24 04:05:05 crc kubenswrapper[4772]: I0124 04:05:05.454545 4772 generic.go:334] "Generic (PLEG): container finished" podID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerID="003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13" exitCode=0 Jan 24 04:05:05 crc kubenswrapper[4772]: I0124 04:05:05.454621 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkmfc/must-gather-s4sds" event={"ID":"717cef35-4d54-45f6-8fe2-20a0260eca6a","Type":"ContainerDied","Data":"003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13"} Jan 24 04:05:05 crc kubenswrapper[4772]: I0124 04:05:05.459404 4772 scope.go:117] "RemoveContainer" containerID="003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13" Jan 24 04:05:06 crc kubenswrapper[4772]: I0124 04:05:06.105667 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wkmfc_must-gather-s4sds_717cef35-4d54-45f6-8fe2-20a0260eca6a/gather/0.log" Jan 24 04:05:13 crc kubenswrapper[4772]: I0124 04:05:13.570643 4772 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-wkmfc/must-gather-s4sds"] Jan 24 04:05:13 crc kubenswrapper[4772]: I0124 04:05:13.571363 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wkmfc/must-gather-s4sds" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerName="copy" containerID="cri-o://d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048" gracePeriod=2 Jan 24 04:05:13 crc kubenswrapper[4772]: I0124 04:05:13.575709 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wkmfc/must-gather-s4sds"] Jan 24 04:05:13 crc kubenswrapper[4772]: I0124 04:05:13.950135 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wkmfc_must-gather-s4sds_717cef35-4d54-45f6-8fe2-20a0260eca6a/copy/0.log" Jan 24 04:05:13 crc kubenswrapper[4772]: I0124 04:05:13.951094 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.066251 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/717cef35-4d54-45f6-8fe2-20a0260eca6a-must-gather-output\") pod \"717cef35-4d54-45f6-8fe2-20a0260eca6a\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.066468 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sl6d\" (UniqueName: \"kubernetes.io/projected/717cef35-4d54-45f6-8fe2-20a0260eca6a-kube-api-access-5sl6d\") pod \"717cef35-4d54-45f6-8fe2-20a0260eca6a\" (UID: \"717cef35-4d54-45f6-8fe2-20a0260eca6a\") " Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.076072 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/717cef35-4d54-45f6-8fe2-20a0260eca6a-kube-api-access-5sl6d" (OuterVolumeSpecName: "kube-api-access-5sl6d") pod "717cef35-4d54-45f6-8fe2-20a0260eca6a" (UID: "717cef35-4d54-45f6-8fe2-20a0260eca6a"). InnerVolumeSpecName "kube-api-access-5sl6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.129571 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/717cef35-4d54-45f6-8fe2-20a0260eca6a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "717cef35-4d54-45f6-8fe2-20a0260eca6a" (UID: "717cef35-4d54-45f6-8fe2-20a0260eca6a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.167726 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sl6d\" (UniqueName: \"kubernetes.io/projected/717cef35-4d54-45f6-8fe2-20a0260eca6a-kube-api-access-5sl6d\") on node \"crc\" DevicePath \"\"" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.167784 4772 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/717cef35-4d54-45f6-8fe2-20a0260eca6a-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.522573 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wkmfc_must-gather-s4sds_717cef35-4d54-45f6-8fe2-20a0260eca6a/copy/0.log" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.523976 4772 generic.go:334] "Generic (PLEG): container finished" podID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerID="d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048" exitCode=143 Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.524017 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkmfc/must-gather-s4sds" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.524049 4772 scope.go:117] "RemoveContainer" containerID="d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.554347 4772 scope.go:117] "RemoveContainer" containerID="003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.613992 4772 scope.go:117] "RemoveContainer" containerID="d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048" Jan 24 04:05:14 crc kubenswrapper[4772]: E0124 04:05:14.614677 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048\": container with ID starting with d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048 not found: ID does not exist" containerID="d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.614730 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048"} err="failed to get container status \"d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048\": rpc error: code = NotFound desc = could not find container \"d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048\": container with ID starting with d5e122a1a770c61b5cffc2e9757d8c01dc1a3249f63300155d57ad12829b4048 not found: ID does not exist" Jan 24 04:05:14 crc kubenswrapper[4772]: I0124 04:05:14.614798 4772 scope.go:117] "RemoveContainer" containerID="003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13" Jan 24 04:05:14 crc kubenswrapper[4772]: E0124 04:05:14.615422 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13\": container with ID starting with 003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13 not found: ID does not exist" containerID="003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13" Jan 24 04:05:14 crc 
kubenswrapper[4772]: I0124 04:05:14.615452 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13"} err="failed to get container status \"003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13\": rpc error: code = NotFound desc = could not find container \"003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13\": container with ID starting with 003336bdb65db75a28ba05c138b8806084e62697ae86c359cb0675a2d6747e13 not found: ID does not exist" Jan 24 04:05:15 crc kubenswrapper[4772]: I0124 04:05:15.669435 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" path="/var/lib/kubelet/pods/717cef35-4d54-45f6-8fe2-20a0260eca6a/volumes" Jan 24 04:05:16 crc kubenswrapper[4772]: I0124 04:05:16.899773 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 04:05:16 crc kubenswrapper[4772]: I0124 04:05:16.900529 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 04:05:46 crc kubenswrapper[4772]: I0124 04:05:46.899997 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 04:05:46 crc kubenswrapper[4772]: I0124 04:05:46.900846 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 04:06:16 crc kubenswrapper[4772]: I0124 04:06:16.900266 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 04:06:16 crc kubenswrapper[4772]: I0124 04:06:16.901222 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 04:06:16 crc kubenswrapper[4772]: I0124 04:06:16.901338 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" Jan 24 04:06:16 crc kubenswrapper[4772]: I0124 04:06:16.902584 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"367ef2e272ab723a6a06c17ef1c2326ab806530cacb8836f168e267336d36207"} 
pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 04:06:16 crc kubenswrapper[4772]: I0124 04:06:16.902675 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://367ef2e272ab723a6a06c17ef1c2326ab806530cacb8836f168e267336d36207" gracePeriod=600 Jan 24 04:06:17 crc kubenswrapper[4772]: I0124 04:06:17.032584 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="367ef2e272ab723a6a06c17ef1c2326ab806530cacb8836f168e267336d36207" exitCode=0 Jan 24 04:06:17 crc kubenswrapper[4772]: I0124 04:06:17.032712 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"367ef2e272ab723a6a06c17ef1c2326ab806530cacb8836f168e267336d36207"} Jan 24 04:06:17 crc kubenswrapper[4772]: I0124 04:06:17.033291 4772 scope.go:117] "RemoveContainer" containerID="8bcdebca1206d8fdd16116f29a35ce352cd252b575a61af374af1a09608438c8" Jan 24 04:06:18 crc kubenswrapper[4772]: I0124 04:06:18.044971 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerStarted","Data":"f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"} Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.204443 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bw2n5"] Jan 24 04:06:44 crc kubenswrapper[4772]: E0124 04:06:44.205604 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerName="gather" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.205628 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerName="gather" Jan 24 04:06:44 crc kubenswrapper[4772]: E0124 04:06:44.205665 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerName="copy" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.205679 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerName="copy" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.205889 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerName="copy" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.205921 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="717cef35-4d54-45f6-8fe2-20a0260eca6a" containerName="gather" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.207469 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.217405 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bw2n5"] Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.222956 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-utilities\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.223036 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2khtp\" (UniqueName: \"kubernetes.io/projected/6f6f05c3-f75b-4954-a4fd-994e05fddf24-kube-api-access-2khtp\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.223080 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-catalog-content\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.324197 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-utilities\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.324494 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2khtp\" (UniqueName: \"kubernetes.io/projected/6f6f05c3-f75b-4954-a4fd-994e05fddf24-kube-api-access-2khtp\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.324635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-catalog-content\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.324894 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-utilities\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.325173 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-catalog-content\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.352794 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2khtp\" (UniqueName: \"kubernetes.io/projected/6f6f05c3-f75b-4954-a4fd-994e05fddf24-kube-api-access-2khtp\") pod \"certified-operators-bw2n5\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.535357 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:44 crc kubenswrapper[4772]: I0124 04:06:44.839944 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bw2n5"] Jan 24 04:06:45 crc kubenswrapper[4772]: I0124 04:06:45.277466 4772 generic.go:334] "Generic (PLEG): container finished" podID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerID="734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056" exitCode=0 Jan 24 04:06:45 crc kubenswrapper[4772]: I0124 04:06:45.278103 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw2n5" event={"ID":"6f6f05c3-f75b-4954-a4fd-994e05fddf24","Type":"ContainerDied","Data":"734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056"} Jan 24 04:06:45 crc kubenswrapper[4772]: I0124 04:06:45.278238 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw2n5" event={"ID":"6f6f05c3-f75b-4954-a4fd-994e05fddf24","Type":"ContainerStarted","Data":"2fbc4fc83dd793c87777cbc6a4c155a8779d314b55ffd62befa78e9006bf40ff"} Jan 24 04:06:45 crc kubenswrapper[4772]: I0124 04:06:45.280342 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 04:06:46 crc kubenswrapper[4772]: I0124 04:06:46.298094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw2n5" event={"ID":"6f6f05c3-f75b-4954-a4fd-994e05fddf24","Type":"ContainerStarted","Data":"8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2"} Jan 24 04:06:47 crc kubenswrapper[4772]: I0124 04:06:47.309331 4772 generic.go:334] "Generic (PLEG): container finished" podID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerID="8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2" exitCode=0 Jan 24 04:06:47 crc kubenswrapper[4772]: I0124 04:06:47.309395 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw2n5" event={"ID":"6f6f05c3-f75b-4954-a4fd-994e05fddf24","Type":"ContainerDied","Data":"8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2"} Jan 24 04:06:48 crc kubenswrapper[4772]: I0124 04:06:48.320777 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw2n5" event={"ID":"6f6f05c3-f75b-4954-a4fd-994e05fddf24","Type":"ContainerStarted","Data":"ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be"} Jan 24 04:06:48 crc kubenswrapper[4772]: I0124 04:06:48.350675 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bw2n5" podStartSLOduration=1.8596821289999999 podStartE2EDuration="4.350648865s" podCreationTimestamp="2026-01-24 04:06:44 +0000 UTC" firstStartedPulling="2026-01-24 04:06:45.279722878 +0000 UTC m=+1502.316813633" lastFinishedPulling="2026-01-24 04:06:47.770689644 +0000 UTC m=+1504.807780369" observedRunningTime="2026-01-24 04:06:48.345366108 +0000 UTC m=+1505.382456863" watchObservedRunningTime="2026-01-24 
04:06:48.350648865 +0000 UTC m=+1505.387739630" Jan 24 04:06:54 crc kubenswrapper[4772]: I0124 04:06:54.536610 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:54 crc kubenswrapper[4772]: I0124 04:06:54.537351 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:54 crc kubenswrapper[4772]: I0124 04:06:54.596340 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:55 crc kubenswrapper[4772]: I0124 04:06:55.425530 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:55 crc kubenswrapper[4772]: I0124 04:06:55.494117 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bw2n5"] Jan 24 04:06:57 crc kubenswrapper[4772]: I0124 04:06:57.385148 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bw2n5" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="registry-server" containerID="cri-o://ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be" gracePeriod=2 Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.335454 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.386160 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2khtp\" (UniqueName: \"kubernetes.io/projected/6f6f05c3-f75b-4954-a4fd-994e05fddf24-kube-api-access-2khtp\") pod \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.391508 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6f05c3-f75b-4954-a4fd-994e05fddf24-kube-api-access-2khtp" (OuterVolumeSpecName: "kube-api-access-2khtp") pod "6f6f05c3-f75b-4954-a4fd-994e05fddf24" (UID: "6f6f05c3-f75b-4954-a4fd-994e05fddf24"). InnerVolumeSpecName "kube-api-access-2khtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.394521 4772 generic.go:334] "Generic (PLEG): container finished" podID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerID="ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be" exitCode=0 Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.394567 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw2n5" event={"ID":"6f6f05c3-f75b-4954-a4fd-994e05fddf24","Type":"ContainerDied","Data":"ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be"} Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.394600 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw2n5" event={"ID":"6f6f05c3-f75b-4954-a4fd-994e05fddf24","Type":"ContainerDied","Data":"2fbc4fc83dd793c87777cbc6a4c155a8779d314b55ffd62befa78e9006bf40ff"} Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.394609 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bw2n5" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.394631 4772 scope.go:117] "RemoveContainer" containerID="ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.426291 4772 scope.go:117] "RemoveContainer" containerID="8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.450728 4772 scope.go:117] "RemoveContainer" containerID="734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.468815 4772 scope.go:117] "RemoveContainer" containerID="ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be" Jan 24 04:06:58 crc kubenswrapper[4772]: E0124 04:06:58.469444 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be\": container with ID starting with ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be not found: ID does not exist" containerID="ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.469519 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be"} err="failed to get container status \"ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be\": rpc error: code = NotFound desc = could not find container \"ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be\": container with ID starting with ea8494e505f1a341b51442e15555ee16e460dafe6d8827e1b84d4854eb4114be not found: ID does not exist" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.469578 4772 scope.go:117] "RemoveContainer" containerID="8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2" Jan 24 04:06:58 crc kubenswrapper[4772]: E0124 04:06:58.470074 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2\": container with ID starting with 8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2 not found: ID does not exist" containerID="8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.470103 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2"} err="failed to get container status \"8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2\": rpc error: code = NotFound desc = could not find container \"8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2\": container with ID starting with 8c42374e90bc3e3d7ee81b0042e65451cfc911d8e7100c035b4d5fa6900761b2 not found: ID does not exist" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.470123 4772 scope.go:117] "RemoveContainer" containerID="734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056" Jan 24 04:06:58 crc kubenswrapper[4772]: E0124 04:06:58.470567 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056\": container with ID starting 
with 734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056 not found: ID does not exist" containerID="734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.470602 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056"} err="failed to get container status \"734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056\": rpc error: code = NotFound desc = could not find container \"734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056\": container with ID starting with 734cc121e7dd5dc691c2877ca00734ea5a34f4429c3f3641f7965611942c0056 not found: ID does not exist" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.487301 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-catalog-content\") pod \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.487362 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-utilities\") pod \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\" (UID: \"6f6f05c3-f75b-4954-a4fd-994e05fddf24\") " Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.487582 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2khtp\" (UniqueName: \"kubernetes.io/projected/6f6f05c3-f75b-4954-a4fd-994e05fddf24-kube-api-access-2khtp\") on node \"crc\" DevicePath \"\"" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.488295 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-utilities" (OuterVolumeSpecName: "utilities") pod "6f6f05c3-f75b-4954-a4fd-994e05fddf24" (UID: "6f6f05c3-f75b-4954-a4fd-994e05fddf24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.550994 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f6f05c3-f75b-4954-a4fd-994e05fddf24" (UID: "6f6f05c3-f75b-4954-a4fd-994e05fddf24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.589121 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.589186 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6f05c3-f75b-4954-a4fd-994e05fddf24-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.738147 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bw2n5"] Jan 24 04:06:58 crc kubenswrapper[4772]: I0124 04:06:58.742627 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bw2n5"] Jan 24 04:06:59 crc kubenswrapper[4772]: I0124 04:06:59.673167 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" path="/var/lib/kubelet/pods/6f6f05c3-f75b-4954-a4fd-994e05fddf24/volumes" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.309991 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nhkt"] Jan 24 04:07:23 crc kubenswrapper[4772]: E0124 04:07:23.310727 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="extract-utilities" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.310755 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="extract-utilities" Jan 24 04:07:23 crc kubenswrapper[4772]: E0124 04:07:23.310770 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="registry-server" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.310776 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="registry-server" Jan 24 04:07:23 crc kubenswrapper[4772]: E0124 04:07:23.310785 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="extract-content" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.310791 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="extract-content" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.310882 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6f05c3-f75b-4954-a4fd-994e05fddf24" containerName="registry-server" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.311671 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.326233 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nhkt"] Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.480021 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-utilities\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.480077 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-catalog-content\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.480119 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffq62\" (UniqueName: \"kubernetes.io/projected/9d17fc66-ed5b-47f6-819a-7c518e6fd546-kube-api-access-ffq62\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.581706 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffq62\" (UniqueName: \"kubernetes.io/projected/9d17fc66-ed5b-47f6-819a-7c518e6fd546-kube-api-access-ffq62\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.581791 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-utilities\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.581824 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-catalog-content\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.582214 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-catalog-content\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.582376 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-utilities\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.614363 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ffq62\" (UniqueName: \"kubernetes.io/projected/9d17fc66-ed5b-47f6-819a-7c518e6fd546-kube-api-access-ffq62\") pod \"community-operators-4nhkt\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:23 crc kubenswrapper[4772]: I0124 04:07:23.631149 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:24 crc kubenswrapper[4772]: I0124 04:07:24.127355 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nhkt"] Jan 24 04:07:24 crc kubenswrapper[4772]: I0124 04:07:24.595386 4772 generic.go:334] "Generic (PLEG): container finished" podID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerID="1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884" exitCode=0 Jan 24 04:07:24 crc kubenswrapper[4772]: I0124 04:07:24.595549 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nhkt" event={"ID":"9d17fc66-ed5b-47f6-819a-7c518e6fd546","Type":"ContainerDied","Data":"1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884"} Jan 24 04:07:24 crc kubenswrapper[4772]: I0124 04:07:24.595870 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nhkt" event={"ID":"9d17fc66-ed5b-47f6-819a-7c518e6fd546","Type":"ContainerStarted","Data":"f200b8ac59d2b40407302c8002c0f452a80df3e689270f8b2ee6ab62e6e0462b"} Jan 24 04:07:26 crc kubenswrapper[4772]: I0124 04:07:26.610496 4772 generic.go:334] "Generic (PLEG): container finished" podID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerID="859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e" exitCode=0 Jan 24 04:07:26 crc kubenswrapper[4772]: I0124 04:07:26.610562 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nhkt" event={"ID":"9d17fc66-ed5b-47f6-819a-7c518e6fd546","Type":"ContainerDied","Data":"859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e"} Jan 24 04:07:27 crc kubenswrapper[4772]: I0124 04:07:27.622997 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nhkt" event={"ID":"9d17fc66-ed5b-47f6-819a-7c518e6fd546","Type":"ContainerStarted","Data":"1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6"} Jan 24 04:07:27 crc kubenswrapper[4772]: I0124 04:07:27.650194 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nhkt" podStartSLOduration=1.8641085020000001 podStartE2EDuration="4.650174269s" podCreationTimestamp="2026-01-24 04:07:23 +0000 UTC" firstStartedPulling="2026-01-24 04:07:24.598198897 +0000 UTC m=+1541.635289632" lastFinishedPulling="2026-01-24 04:07:27.384264644 +0000 UTC m=+1544.421355399" observedRunningTime="2026-01-24 04:07:27.649528121 +0000 UTC m=+1544.686618916" watchObservedRunningTime="2026-01-24 04:07:27.650174269 +0000 UTC m=+1544.687265004" Jan 24 04:07:29 crc kubenswrapper[4772]: I0124 04:07:29.838809 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rn624"] Jan 24 04:07:29 crc kubenswrapper[4772]: I0124 04:07:29.840010 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:29 crc kubenswrapper[4772]: I0124 04:07:29.852114 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rn624"] Jan 24 04:07:29 crc kubenswrapper[4772]: I0124 04:07:29.977373 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-utilities\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:29 crc kubenswrapper[4772]: I0124 04:07:29.977419 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-catalog-content\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:29 crc kubenswrapper[4772]: I0124 04:07:29.977780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmn4g\" (UniqueName: \"kubernetes.io/projected/68290319-c301-492c-a2cb-414f6336efe2-kube-api-access-wmn4g\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.078502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-utilities\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.078547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-catalog-content\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.078613 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmn4g\" (UniqueName: \"kubernetes.io/projected/68290319-c301-492c-a2cb-414f6336efe2-kube-api-access-wmn4g\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.079167 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-utilities\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.079196 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-catalog-content\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.113423 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wmn4g\" (UniqueName: \"kubernetes.io/projected/68290319-c301-492c-a2cb-414f6336efe2-kube-api-access-wmn4g\") pod \"redhat-operators-rn624\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.165597 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:30 crc kubenswrapper[4772]: I0124 04:07:30.644239 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rn624"] Jan 24 04:07:31 crc kubenswrapper[4772]: I0124 04:07:31.656009 4772 generic.go:334] "Generic (PLEG): container finished" podID="68290319-c301-492c-a2cb-414f6336efe2" containerID="dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b" exitCode=0 Jan 24 04:07:31 crc kubenswrapper[4772]: I0124 04:07:31.656131 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn624" event={"ID":"68290319-c301-492c-a2cb-414f6336efe2","Type":"ContainerDied","Data":"dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b"} Jan 24 04:07:31 crc kubenswrapper[4772]: I0124 04:07:31.659130 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn624" event={"ID":"68290319-c301-492c-a2cb-414f6336efe2","Type":"ContainerStarted","Data":"dc6d6d63e1e1b1a12e12b21d4fa01d463c7e5b32e9702b292fca1114642c19b0"} Jan 24 04:07:33 crc kubenswrapper[4772]: I0124 04:07:33.632268 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:33 crc kubenswrapper[4772]: I0124 04:07:33.632912 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:33 crc kubenswrapper[4772]: I0124 04:07:33.681766 4772 generic.go:334] "Generic (PLEG): container finished" podID="68290319-c301-492c-a2cb-414f6336efe2" containerID="a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79" exitCode=0 Jan 24 04:07:33 crc kubenswrapper[4772]: I0124 04:07:33.681811 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn624" event={"ID":"68290319-c301-492c-a2cb-414f6336efe2","Type":"ContainerDied","Data":"a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79"} Jan 24 04:07:33 crc kubenswrapper[4772]: I0124 04:07:33.689935 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:33 crc kubenswrapper[4772]: I0124 04:07:33.759479 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:34 crc kubenswrapper[4772]: I0124 04:07:34.692933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn624" event={"ID":"68290319-c301-492c-a2cb-414f6336efe2","Type":"ContainerStarted","Data":"d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900"} Jan 24 04:07:34 crc kubenswrapper[4772]: I0124 04:07:34.880694 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rn624" podStartSLOduration=3.331704732 podStartE2EDuration="5.880668342s" podCreationTimestamp="2026-01-24 04:07:29 +0000 UTC" firstStartedPulling="2026-01-24 04:07:31.659285894 +0000 UTC 
m=+1548.696376659" lastFinishedPulling="2026-01-24 04:07:34.208249534 +0000 UTC m=+1551.245340269" observedRunningTime="2026-01-24 04:07:34.718009313 +0000 UTC m=+1551.755100118" watchObservedRunningTime="2026-01-24 04:07:34.880668342 +0000 UTC m=+1551.917759097" Jan 24 04:07:34 crc kubenswrapper[4772]: I0124 04:07:34.883319 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nhkt"] Jan 24 04:07:35 crc kubenswrapper[4772]: I0124 04:07:35.699401 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nhkt" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="registry-server" containerID="cri-o://1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6" gracePeriod=2 Jan 24 04:07:35 crc kubenswrapper[4772]: E0124 04:07:35.770497 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d17fc66_ed5b_47f6_819a_7c518e6fd546.slice/crio-1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6.scope\": RecentStats: unable to find data in memory cache]" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.069172 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.181891 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-utilities\") pod \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.182236 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffq62\" (UniqueName: \"kubernetes.io/projected/9d17fc66-ed5b-47f6-819a-7c518e6fd546-kube-api-access-ffq62\") pod \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.182375 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-catalog-content\") pod \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\" (UID: \"9d17fc66-ed5b-47f6-819a-7c518e6fd546\") " Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.182852 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-utilities" (OuterVolumeSpecName: "utilities") pod "9d17fc66-ed5b-47f6-819a-7c518e6fd546" (UID: "9d17fc66-ed5b-47f6-819a-7c518e6fd546"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.186989 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d17fc66-ed5b-47f6-819a-7c518e6fd546-kube-api-access-ffq62" (OuterVolumeSpecName: "kube-api-access-ffq62") pod "9d17fc66-ed5b-47f6-819a-7c518e6fd546" (UID: "9d17fc66-ed5b-47f6-819a-7c518e6fd546"). InnerVolumeSpecName "kube-api-access-ffq62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.235424 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d17fc66-ed5b-47f6-819a-7c518e6fd546" (UID: "9d17fc66-ed5b-47f6-819a-7c518e6fd546"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.284420 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.284455 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffq62\" (UniqueName: \"kubernetes.io/projected/9d17fc66-ed5b-47f6-819a-7c518e6fd546-kube-api-access-ffq62\") on node \"crc\" DevicePath \"\"" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.284466 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d17fc66-ed5b-47f6-819a-7c518e6fd546-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.709545 4772 generic.go:334] "Generic (PLEG): container finished" podID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerID="1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6" exitCode=0 Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.709588 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nhkt" event={"ID":"9d17fc66-ed5b-47f6-819a-7c518e6fd546","Type":"ContainerDied","Data":"1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6"} Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.709615 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nhkt" event={"ID":"9d17fc66-ed5b-47f6-819a-7c518e6fd546","Type":"ContainerDied","Data":"f200b8ac59d2b40407302c8002c0f452a80df3e689270f8b2ee6ab62e6e0462b"} Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.709633 4772 scope.go:117] "RemoveContainer" containerID="1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.709712 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nhkt" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.732709 4772 scope.go:117] "RemoveContainer" containerID="859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.748590 4772 scope.go:117] "RemoveContainer" containerID="1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.761907 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nhkt"] Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.765416 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nhkt"] Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.806451 4772 scope.go:117] "RemoveContainer" containerID="1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6" Jan 24 04:07:36 crc kubenswrapper[4772]: E0124 04:07:36.807007 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6\": container with ID starting with 1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6 not found: ID does not exist" containerID="1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.807054 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6"} err="failed to get container status \"1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6\": rpc error: code = NotFound desc = could not find container \"1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6\": container with ID starting with 1e70d803605c51832866a2a9fabad48c357b9c28c04f06106f3129be213c34c6 not found: ID does not exist" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.807078 4772 scope.go:117] "RemoveContainer" containerID="859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e" Jan 24 04:07:36 crc kubenswrapper[4772]: E0124 04:07:36.807565 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e\": container with ID starting with 859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e not found: ID does not exist" containerID="859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.807586 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e"} err="failed to get container status \"859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e\": rpc error: code = NotFound desc = could not find container \"859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e\": container with ID starting with 859d4dc0801cd69ff5c71a9a821eb86fad50556f82ba40edfb71c74f4668cf9e not found: ID does not exist" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.807600 4772 scope.go:117] "RemoveContainer" containerID="1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884" Jan 24 04:07:36 crc kubenswrapper[4772]: E0124 04:07:36.807926 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884\": container with ID starting with 1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884 not found: ID does not exist" containerID="1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884" Jan 24 04:07:36 crc kubenswrapper[4772]: I0124 04:07:36.807963 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884"} err="failed to get container status \"1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884\": rpc error: code = NotFound desc = could not find container \"1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884\": container with ID starting with 1af294338692a389ebc9b926a417a0fb2ed86875548de5c945b3109b47a52884 not found: ID does not exist" Jan 24 04:07:37 crc kubenswrapper[4772]: I0124 04:07:37.682896 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" path="/var/lib/kubelet/pods/9d17fc66-ed5b-47f6-819a-7c518e6fd546/volumes" Jan 24 04:07:40 crc kubenswrapper[4772]: I0124 04:07:40.166073 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:40 crc kubenswrapper[4772]: I0124 04:07:40.166466 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:41 crc kubenswrapper[4772]: I0124 04:07:41.219311 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rn624" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="registry-server" probeResult="failure" output=< Jan 24 04:07:41 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Jan 24 04:07:41 crc kubenswrapper[4772]: > Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.854811 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l2l6p/must-gather-9cb86"] Jan 24 04:07:47 crc kubenswrapper[4772]: E0124 04:07:47.856308 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="extract-content" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.856333 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="extract-content" Jan 24 04:07:47 crc kubenswrapper[4772]: E0124 04:07:47.856368 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="extract-utilities" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.856383 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="extract-utilities" Jan 24 04:07:47 crc kubenswrapper[4772]: E0124 04:07:47.856396 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="registry-server" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.856412 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="registry-server" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.856597 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d17fc66-ed5b-47f6-819a-7c518e6fd546" containerName="registry-server" Jan 24 
04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.857719 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.869509 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l2l6p"/"kube-root-ca.crt" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.869862 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l2l6p"/"default-dockercfg-hjpfj" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.870125 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l2l6p"/"openshift-service-ca.crt" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.879886 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l2l6p/must-gather-9cb86"] Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.958534 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvh8t\" (UniqueName: \"kubernetes.io/projected/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-kube-api-access-qvh8t\") pod \"must-gather-9cb86\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") " pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:47 crc kubenswrapper[4772]: I0124 04:07:47.958642 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-must-gather-output\") pod \"must-gather-9cb86\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") " pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:48 crc kubenswrapper[4772]: I0124 04:07:48.060152 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-must-gather-output\") pod \"must-gather-9cb86\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") " pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:48 crc kubenswrapper[4772]: I0124 04:07:48.060227 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvh8t\" (UniqueName: \"kubernetes.io/projected/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-kube-api-access-qvh8t\") pod \"must-gather-9cb86\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") " pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:48 crc kubenswrapper[4772]: I0124 04:07:48.060725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-must-gather-output\") pod \"must-gather-9cb86\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") " pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:48 crc kubenswrapper[4772]: I0124 04:07:48.081157 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvh8t\" (UniqueName: \"kubernetes.io/projected/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-kube-api-access-qvh8t\") pod \"must-gather-9cb86\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") " pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:48 crc kubenswrapper[4772]: I0124 04:07:48.181945 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l2l6p/must-gather-9cb86" Jan 24 04:07:48 crc kubenswrapper[4772]: I0124 04:07:48.384263 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l2l6p/must-gather-9cb86"] Jan 24 04:07:48 crc kubenswrapper[4772]: I0124 04:07:48.819579 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l6p/must-gather-9cb86" event={"ID":"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a","Type":"ContainerStarted","Data":"c88b253cb8c3ed003ef26fdcce586606c712eff86251f195a7e8c3d79d45b23e"} Jan 24 04:07:49 crc kubenswrapper[4772]: I0124 04:07:49.067873 4772 scope.go:117] "RemoveContainer" containerID="7166289c9d9c0aa48229b8c924e78c98e3670c5ae79b3c78203f97b25849bc8c" Jan 24 04:07:49 crc kubenswrapper[4772]: I0124 04:07:49.826167 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l6p/must-gather-9cb86" event={"ID":"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a","Type":"ContainerStarted","Data":"8053752a46e12ea78898c9e57ac813301e887388f83a96500ca96a2e764e6963"} Jan 24 04:07:49 crc kubenswrapper[4772]: I0124 04:07:49.826212 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l6p/must-gather-9cb86" event={"ID":"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a","Type":"ContainerStarted","Data":"b6a39eb23ba2bee0694a6f6ab056bb8baeca2901b9adc2e6d6458eba8dfdf87f"} Jan 24 04:07:49 crc kubenswrapper[4772]: I0124 04:07:49.845026 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l2l6p/must-gather-9cb86" podStartSLOduration=2.845003354 podStartE2EDuration="2.845003354s" podCreationTimestamp="2026-01-24 04:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 04:07:49.840438016 +0000 UTC m=+1566.877528751" watchObservedRunningTime="2026-01-24 04:07:49.845003354 +0000 UTC m=+1566.882094079" Jan 24 04:07:50 crc kubenswrapper[4772]: I0124 04:07:50.208035 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:50 crc kubenswrapper[4772]: I0124 04:07:50.254298 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:50 crc kubenswrapper[4772]: I0124 04:07:50.451167 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rn624"] Jan 24 04:07:51 crc kubenswrapper[4772]: I0124 04:07:51.836594 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rn624" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="registry-server" containerID="cri-o://d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900" gracePeriod=2 Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.240480 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.431649 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-catalog-content\") pod \"68290319-c301-492c-a2cb-414f6336efe2\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.431725 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-utilities\") pod \"68290319-c301-492c-a2cb-414f6336efe2\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.431776 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmn4g\" (UniqueName: \"kubernetes.io/projected/68290319-c301-492c-a2cb-414f6336efe2-kube-api-access-wmn4g\") pod \"68290319-c301-492c-a2cb-414f6336efe2\" (UID: \"68290319-c301-492c-a2cb-414f6336efe2\") " Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.433303 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-utilities" (OuterVolumeSpecName: "utilities") pod "68290319-c301-492c-a2cb-414f6336efe2" (UID: "68290319-c301-492c-a2cb-414f6336efe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.444016 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68290319-c301-492c-a2cb-414f6336efe2-kube-api-access-wmn4g" (OuterVolumeSpecName: "kube-api-access-wmn4g") pod "68290319-c301-492c-a2cb-414f6336efe2" (UID: "68290319-c301-492c-a2cb-414f6336efe2"). InnerVolumeSpecName "kube-api-access-wmn4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.534027 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.534080 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmn4g\" (UniqueName: \"kubernetes.io/projected/68290319-c301-492c-a2cb-414f6336efe2-kube-api-access-wmn4g\") on node \"crc\" DevicePath \"\"" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.542132 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68290319-c301-492c-a2cb-414f6336efe2" (UID: "68290319-c301-492c-a2cb-414f6336efe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.635018 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68290319-c301-492c-a2cb-414f6336efe2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.882967 4772 generic.go:334] "Generic (PLEG): container finished" podID="68290319-c301-492c-a2cb-414f6336efe2" containerID="d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900" exitCode=0 Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.883027 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rn624" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.883025 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn624" event={"ID":"68290319-c301-492c-a2cb-414f6336efe2","Type":"ContainerDied","Data":"d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900"} Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.883160 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn624" event={"ID":"68290319-c301-492c-a2cb-414f6336efe2","Type":"ContainerDied","Data":"dc6d6d63e1e1b1a12e12b21d4fa01d463c7e5b32e9702b292fca1114642c19b0"} Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.883179 4772 scope.go:117] "RemoveContainer" containerID="d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.908969 4772 scope.go:117] "RemoveContainer" containerID="a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.922357 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rn624"] Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.930150 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rn624"] Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.959966 4772 scope.go:117] "RemoveContainer" containerID="dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.991840 4772 scope.go:117] "RemoveContainer" containerID="d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900" Jan 24 04:07:52 crc kubenswrapper[4772]: E0124 04:07:52.992383 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900\": container with ID starting with d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900 not found: ID does not exist" containerID="d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900" Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.992441 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900"} err="failed to get container status \"d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900\": rpc error: code = NotFound desc = could not find container \"d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900\": container with ID starting with d339d97f60295aa7134af6c4eae05ffdb11f99c64ebad735a3f766e950d99900 not found: ID does not exist" Jan 24 04:07:52 crc 
kubenswrapper[4772]: I0124 04:07:52.992465 4772 scope.go:117] "RemoveContainer" containerID="a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79"
Jan 24 04:07:52 crc kubenswrapper[4772]: E0124 04:07:52.992878 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79\": container with ID starting with a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79 not found: ID does not exist" containerID="a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79"
Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.992921 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79"} err="failed to get container status \"a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79\": rpc error: code = NotFound desc = could not find container \"a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79\": container with ID starting with a51e72339a249e9d603dac382233ffa32a8b45d9a0b9d9d59831edbad5fa5c79 not found: ID does not exist"
Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.992949 4772 scope.go:117] "RemoveContainer" containerID="dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b"
Jan 24 04:07:52 crc kubenswrapper[4772]: E0124 04:07:52.993288 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b\": container with ID starting with dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b not found: ID does not exist" containerID="dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b"
Jan 24 04:07:52 crc kubenswrapper[4772]: I0124 04:07:52.993315 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b"} err="failed to get container status \"dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b\": rpc error: code = NotFound desc = could not find container \"dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b\": container with ID starting with dc8ac0879011482b78a8d9edb4c9a62c1b0a862fffd3044b3bd56f310256373b not found: ID does not exist"
Jan 24 04:07:53 crc kubenswrapper[4772]: I0124 04:07:53.667023 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68290319-c301-492c-a2cb-414f6336efe2" path="/var/lib/kubelet/pods/68290319-c301-492c-a2cb-414f6336efe2/volumes"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.685293 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tqbj4"]
Jan 24 04:08:11 crc kubenswrapper[4772]: E0124 04:08:11.686383 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="extract-utilities"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.686408 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="extract-utilities"
Jan 24 04:08:11 crc kubenswrapper[4772]: E0124 04:08:11.686427 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="registry-server"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.686442 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="registry-server"
Jan 24 04:08:11 crc kubenswrapper[4772]: E0124 04:08:11.686492 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="extract-content"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.686506 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="extract-content"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.686799 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="68290319-c301-492c-a2cb-414f6336efe2" containerName="registry-server"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.688242 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.704284 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqbj4"]
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.833621 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-utilities\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.833700 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-catalog-content\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.833728 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrhtt\" (UniqueName: \"kubernetes.io/projected/2b346f39-44d4-416e-8ad1-de5d9faa7072-kube-api-access-lrhtt\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.935251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrhtt\" (UniqueName: \"kubernetes.io/projected/2b346f39-44d4-416e-8ad1-de5d9faa7072-kube-api-access-lrhtt\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.935371 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-utilities\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.935424 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-catalog-content\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.936115 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-catalog-content\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.936118 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-utilities\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:11 crc kubenswrapper[4772]: I0124 04:08:11.962670 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrhtt\" (UniqueName: \"kubernetes.io/projected/2b346f39-44d4-416e-8ad1-de5d9faa7072-kube-api-access-lrhtt\") pod \"redhat-marketplace-tqbj4\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") " pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:12 crc kubenswrapper[4772]: I0124 04:08:12.047867 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:12 crc kubenswrapper[4772]: I0124 04:08:12.513679 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqbj4"]
Jan 24 04:08:12 crc kubenswrapper[4772]: W0124 04:08:12.516396 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b346f39_44d4_416e_8ad1_de5d9faa7072.slice/crio-9bbff58053e7b46982c6cb8f09fb9715156d0baed331a0b7d7311ab5285ab2d0 WatchSource:0}: Error finding container 9bbff58053e7b46982c6cb8f09fb9715156d0baed331a0b7d7311ab5285ab2d0: Status 404 returned error can't find the container with id 9bbff58053e7b46982c6cb8f09fb9715156d0baed331a0b7d7311ab5285ab2d0
Jan 24 04:08:13 crc kubenswrapper[4772]: I0124 04:08:13.017298 4772 generic.go:334] "Generic (PLEG): container finished" podID="2b346f39-44d4-416e-8ad1-de5d9faa7072" containerID="d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c" exitCode=0
Jan 24 04:08:13 crc kubenswrapper[4772]: I0124 04:08:13.017420 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqbj4" event={"ID":"2b346f39-44d4-416e-8ad1-de5d9faa7072","Type":"ContainerDied","Data":"d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c"}
Jan 24 04:08:13 crc kubenswrapper[4772]: I0124 04:08:13.017624 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqbj4" event={"ID":"2b346f39-44d4-416e-8ad1-de5d9faa7072","Type":"ContainerStarted","Data":"9bbff58053e7b46982c6cb8f09fb9715156d0baed331a0b7d7311ab5285ab2d0"}
Jan 24 04:08:14 crc kubenswrapper[4772]: I0124 04:08:14.026924 4772 generic.go:334] "Generic (PLEG): container finished" podID="2b346f39-44d4-416e-8ad1-de5d9faa7072" containerID="5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b" exitCode=0
Jan 24 04:08:14 crc kubenswrapper[4772]: I0124 04:08:14.027036 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqbj4" event={"ID":"2b346f39-44d4-416e-8ad1-de5d9faa7072","Type":"ContainerDied","Data":"5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b"}
Jan 24 04:08:15 crc kubenswrapper[4772]: I0124 04:08:15.037100 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqbj4" event={"ID":"2b346f39-44d4-416e-8ad1-de5d9faa7072","Type":"ContainerStarted","Data":"5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad"}
Jan 24 04:08:22 crc kubenswrapper[4772]: I0124 04:08:22.048811 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:22 crc kubenswrapper[4772]: I0124 04:08:22.049383 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:22 crc kubenswrapper[4772]: I0124 04:08:22.104433 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:22 crc kubenswrapper[4772]: I0124 04:08:22.134295 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tqbj4" podStartSLOduration=9.669275423 podStartE2EDuration="11.134276615s" podCreationTimestamp="2026-01-24 04:08:11 +0000 UTC" firstStartedPulling="2026-01-24 04:08:13.019512748 +0000 UTC m=+1590.056603503" lastFinishedPulling="2026-01-24 04:08:14.48451396 +0000 UTC m=+1591.521604695" observedRunningTime="2026-01-24 04:08:15.063904948 +0000 UTC m=+1592.100995673" watchObservedRunningTime="2026-01-24 04:08:22.134276615 +0000 UTC m=+1599.171367350"
Jan 24 04:08:22 crc kubenswrapper[4772]: I0124 04:08:22.152117 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:22 crc kubenswrapper[4772]: I0124 04:08:22.345013 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqbj4"]
Jan 24 04:08:24 crc kubenswrapper[4772]: I0124 04:08:24.090809 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tqbj4" podUID="2b346f39-44d4-416e-8ad1-de5d9faa7072" containerName="registry-server" containerID="cri-o://5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad" gracePeriod=2
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.011684 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.100843 4772 generic.go:334] "Generic (PLEG): container finished" podID="2b346f39-44d4-416e-8ad1-de5d9faa7072" containerID="5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad" exitCode=0
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.100902 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqbj4" event={"ID":"2b346f39-44d4-416e-8ad1-de5d9faa7072","Type":"ContainerDied","Data":"5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad"}
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.100949 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqbj4" event={"ID":"2b346f39-44d4-416e-8ad1-de5d9faa7072","Type":"ContainerDied","Data":"9bbff58053e7b46982c6cb8f09fb9715156d0baed331a0b7d7311ab5285ab2d0"}
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.100972 4772 scope.go:117] "RemoveContainer" containerID="5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.100979 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqbj4"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.128918 4772 scope.go:117] "RemoveContainer" containerID="5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.132942 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrhtt\" (UniqueName: \"kubernetes.io/projected/2b346f39-44d4-416e-8ad1-de5d9faa7072-kube-api-access-lrhtt\") pod \"2b346f39-44d4-416e-8ad1-de5d9faa7072\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") "
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.133070 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-catalog-content\") pod \"2b346f39-44d4-416e-8ad1-de5d9faa7072\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") "
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.133152 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-utilities\") pod \"2b346f39-44d4-416e-8ad1-de5d9faa7072\" (UID: \"2b346f39-44d4-416e-8ad1-de5d9faa7072\") "
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.134820 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-utilities" (OuterVolumeSpecName: "utilities") pod "2b346f39-44d4-416e-8ad1-de5d9faa7072" (UID: "2b346f39-44d4-416e-8ad1-de5d9faa7072"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.144871 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b346f39-44d4-416e-8ad1-de5d9faa7072-kube-api-access-lrhtt" (OuterVolumeSpecName: "kube-api-access-lrhtt") pod "2b346f39-44d4-416e-8ad1-de5d9faa7072" (UID: "2b346f39-44d4-416e-8ad1-de5d9faa7072"). InnerVolumeSpecName "kube-api-access-lrhtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.151877 4772 scope.go:117] "RemoveContainer" containerID="d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.160954 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b346f39-44d4-416e-8ad1-de5d9faa7072" (UID: "2b346f39-44d4-416e-8ad1-de5d9faa7072"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.182418 4772 scope.go:117] "RemoveContainer" containerID="5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad"
Jan 24 04:08:25 crc kubenswrapper[4772]: E0124 04:08:25.182884 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad\": container with ID starting with 5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad not found: ID does not exist" containerID="5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.182918 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad"} err="failed to get container status \"5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad\": rpc error: code = NotFound desc = could not find container \"5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad\": container with ID starting with 5a6dde36aa12aedd47e028749df19eb921d0595323bd1ded88a256854e30ecad not found: ID does not exist"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.182949 4772 scope.go:117] "RemoveContainer" containerID="5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b"
Jan 24 04:08:25 crc kubenswrapper[4772]: E0124 04:08:25.183224 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b\": container with ID starting with 5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b not found: ID does not exist" containerID="5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.183241 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b"} err="failed to get container status \"5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b\": rpc error: code = NotFound desc = could not find container \"5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b\": container with ID starting with 5b81dd63dc7ce9a1f3200f8676bcccb4f8320e0c372574c13c4f953ed5d10b5b not found: ID does not exist"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.183254 4772 scope.go:117] "RemoveContainer" containerID="d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c"
Jan 24 04:08:25 crc kubenswrapper[4772]: E0124 04:08:25.183439 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c\": container with ID starting with d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c not found: ID does not exist" containerID="d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.183482 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c"} err="failed to get container status \"d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c\": rpc error: code = NotFound desc = could not find container \"d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c\": container with ID starting with d3fb749cc429094c07b32422a39fa67543151553da474afb82958cbe4d755e1c not found: ID does not exist"
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.235051 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.235271 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b346f39-44d4-416e-8ad1-de5d9faa7072-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.235314 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrhtt\" (UniqueName: \"kubernetes.io/projected/2b346f39-44d4-416e-8ad1-de5d9faa7072-kube-api-access-lrhtt\") on node \"crc\" DevicePath \"\""
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.431382 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqbj4"]
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.435623 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqbj4"]
Jan 24 04:08:25 crc kubenswrapper[4772]: I0124 04:08:25.698433 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b346f39-44d4-416e-8ad1-de5d9faa7072" path="/var/lib/kubelet/pods/2b346f39-44d4-416e-8ad1-de5d9faa7072/volumes"
Jan 24 04:08:45 crc kubenswrapper[4772]: I0124 04:08:45.233558 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f88sl_e9bc8517-d223-42cf-9c6d-5e96dbc58e27/control-plane-machine-set-operator/0.log"
Jan 24 04:08:45 crc kubenswrapper[4772]: I0124 04:08:45.443131 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lpsdw_62148cbe-9135-4627-8b05-05f8f4465d20/machine-api-operator/0.log"
Jan 24 04:08:45 crc kubenswrapper[4772]: I0124 04:08:45.448662 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lpsdw_62148cbe-9135-4627-8b05-05f8f4465d20/kube-rbac-proxy/0.log"
Jan 24 04:08:46 crc kubenswrapper[4772]: I0124 04:08:46.900341 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 04:08:46 crc kubenswrapper[4772]: I0124 04:08:46.900703 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.035998 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8skfz_af65200d-34f2-478a-9c7b-48c6eb982b11/kube-rbac-proxy/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.068366 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8skfz_af65200d-34f2-478a-9c7b-48c6eb982b11/controller/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.198862 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-vs95l_57fd94bd-5a17-4006-8495-fb020332ba40/frr-k8s-webhook-server/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.264672 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.476765 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.500432 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.551037 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.584421 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.658684 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.671284 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.714678 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.780579 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.928181 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-reloader/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.929295 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-metrics/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.958469 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/cp-frr-files/0.log"
Jan 24 04:09:14 crc kubenswrapper[4772]: I0124 04:09:14.962610 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/controller/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.126690 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/kube-rbac-proxy/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.130880 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/kube-rbac-proxy-frr/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.141350 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/frr-metrics/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.330692 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/reloader/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.369568 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fd7b6df9d-zhx85_1e5b7120-c263-41a3-9360-d8c132d6235c/manager/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.413754 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpkk4_e4d8b2b6-95a4-4566-9c4c-d38b90c2eea7/frr/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.506025 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6f9db7bdcf-9p4fp_0e9057ec-4b8c-4cea-a3d3-52001b746757/webhook-server/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.575713 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6krl4_d6566aa5-3d44-45c0-94f7-6c618ac3b626/kube-rbac-proxy/0.log"
Jan 24 04:09:15 crc kubenswrapper[4772]: I0124 04:09:15.777450 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6krl4_d6566aa5-3d44-45c0-94f7-6c618ac3b626/speaker/0.log"
Jan 24 04:09:16 crc kubenswrapper[4772]: I0124 04:09:16.899674 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 04:09:16 crc kubenswrapper[4772]: I0124 04:09:16.899750 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.259291 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/util/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.391647 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/util/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.429498 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/pull/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.440297 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/pull/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.606616 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/util/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.637392 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/pull/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.684463 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lmpd_721069d6-d930-4768-8902-ccdbcde32201/extract/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.783045 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-utilities/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.943490 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-content/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.959009 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-content/0.log"
Jan 24 04:09:42 crc kubenswrapper[4772]: I0124 04:09:42.966193 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-utilities/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.117715 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-utilities/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.143405 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/extract-content/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.323722 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-utilities/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.514677 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-content/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.535679 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qmzlq_e9a42743-0d64-48b8-bcb1-ed7ce83864a9/registry-server/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.545206 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-utilities/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.545433 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-content/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.758955 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-content/0.log"
Jan 24 04:09:43 crc kubenswrapper[4772]: I0124 04:09:43.768844 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/extract-utilities/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.021723 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-v4b5k_b86459ea-dd22-4a69-8561-379087f99c80/marketplace-operator/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.044820 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-utilities/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.088254 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b4m4h_0f6ffb4d-0069-4480-a6e5-b2e8d2a85521/registry-server/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.382379 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-content/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.410000 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-utilities/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.439625 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-content/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.588534 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-utilities/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.666760 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/extract-content/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.709623 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rmps9_0727c90d-0963-4183-8a91-b8e1b8b85b14/registry-server/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.806486 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-utilities/0.log"
Jan 24 04:09:44 crc kubenswrapper[4772]: I0124 04:09:44.989899 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-content/0.log"
Jan 24 04:09:45 crc kubenswrapper[4772]: I0124 04:09:45.021900 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-content/0.log"
Jan 24 04:09:45 crc kubenswrapper[4772]: I0124 04:09:45.029103 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-utilities/0.log"
Jan 24 04:09:45 crc kubenswrapper[4772]: I0124 04:09:45.225318 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-content/0.log"
Jan 24 04:09:45 crc kubenswrapper[4772]: I0124 04:09:45.227469 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/extract-utilities/0.log"
Jan 24 04:09:45 crc kubenswrapper[4772]: I0124 04:09:45.457131 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dk99t_f3bdea06-126d-4d1a-aeac-3b620cccaa0d/registry-server/0.log"
Jan 24 04:09:46 crc kubenswrapper[4772]: I0124 04:09:46.900141 4772 patch_prober.go:28] interesting pod/machine-config-daemon-bnn82 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 04:09:46 crc kubenswrapper[4772]: I0124 04:09:46.900223 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 04:09:46 crc kubenswrapper[4772]: I0124 04:09:46.900305 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnn82"
Jan 24 04:09:46 crc kubenswrapper[4772]: I0124 04:09:46.900878 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"} pod="openshift-machine-config-operator/machine-config-daemon-bnn82" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 04:09:46 crc kubenswrapper[4772]: I0124 04:09:46.900938 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerName="machine-config-daemon" containerID="cri-o://f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490" gracePeriod=600
Jan 24 04:09:47 crc kubenswrapper[4772]: E0124 04:09:47.044423 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:09:47 crc kubenswrapper[4772]: I0124 04:09:47.605860 4772 generic.go:334] "Generic (PLEG): container finished" podID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490" exitCode=0
Jan 24 04:09:47 crc kubenswrapper[4772]: I0124 04:09:47.605908 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" event={"ID":"60ea55cf-a32f-46c5-9ad8-dec5dbc808b0","Type":"ContainerDied","Data":"f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"}
Jan 24 04:09:47 crc kubenswrapper[4772]: I0124 04:09:47.605945 4772 scope.go:117] "RemoveContainer" containerID="367ef2e272ab723a6a06c17ef1c2326ab806530cacb8836f168e267336d36207"
Jan 24 04:09:47 crc kubenswrapper[4772]: I0124 04:09:47.606380 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:09:47 crc kubenswrapper[4772]: E0124 04:09:47.606684 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:10:00 crc kubenswrapper[4772]: I0124 04:10:00.658309 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:10:00 crc kubenswrapper[4772]: E0124 04:10:00.658947 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:10:12 crc kubenswrapper[4772]: I0124 04:10:12.658822 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:10:12 crc kubenswrapper[4772]: E0124 04:10:12.659906 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:10:25 crc kubenswrapper[4772]: I0124 04:10:25.659404 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:10:25 crc kubenswrapper[4772]: E0124 04:10:25.660511 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:10:37 crc kubenswrapper[4772]: I0124 04:10:37.658874 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:10:37 crc kubenswrapper[4772]: E0124 04:10:37.659669 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:10:52 crc kubenswrapper[4772]: I0124 04:10:52.658960 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:10:52 crc kubenswrapper[4772]: E0124 04:10:52.659925 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:10:59 crc kubenswrapper[4772]: I0124 04:10:59.133003 4772 generic.go:334] "Generic (PLEG): container finished" podID="2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a" containerID="b6a39eb23ba2bee0694a6f6ab056bb8baeca2901b9adc2e6d6458eba8dfdf87f" exitCode=0
Jan 24 04:10:59 crc kubenswrapper[4772]: I0124 04:10:59.133068 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l2l6p/must-gather-9cb86" event={"ID":"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a","Type":"ContainerDied","Data":"b6a39eb23ba2bee0694a6f6ab056bb8baeca2901b9adc2e6d6458eba8dfdf87f"}
Jan 24 04:10:59 crc kubenswrapper[4772]: I0124 04:10:59.134406 4772 scope.go:117] "RemoveContainer" containerID="b6a39eb23ba2bee0694a6f6ab056bb8baeca2901b9adc2e6d6458eba8dfdf87f"
Jan 24 04:11:00 crc kubenswrapper[4772]: I0124 04:11:00.132048 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l2l6p_must-gather-9cb86_2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a/gather/0.log"
Jan 24 04:11:07 crc kubenswrapper[4772]: I0124 04:11:07.659600 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:11:07 crc kubenswrapper[4772]: E0124 04:11:07.660348 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:11:09 crc kubenswrapper[4772]: I0124 04:11:09.879436 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l2l6p/must-gather-9cb86"]
Jan 24 04:11:09 crc kubenswrapper[4772]: I0124 04:11:09.882564 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-l2l6p/must-gather-9cb86" podUID="2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a" containerName="copy" containerID="cri-o://8053752a46e12ea78898c9e57ac813301e887388f83a96500ca96a2e764e6963" gracePeriod=2
Jan 24 04:11:09 crc kubenswrapper[4772]: I0124 04:11:09.889906 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l2l6p/must-gather-9cb86"]
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.210211 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l2l6p_must-gather-9cb86_2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a/copy/0.log"
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.211567 4772 generic.go:334] "Generic (PLEG): container finished" podID="2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a" containerID="8053752a46e12ea78898c9e57ac813301e887388f83a96500ca96a2e764e6963" exitCode=143
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.211630 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88b253cb8c3ed003ef26fdcce586606c712eff86251f195a7e8c3d79d45b23e"
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.224049 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l2l6p_must-gather-9cb86_2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a/copy/0.log"
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.224673 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l6p/must-gather-9cb86"
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.420459 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-must-gather-output\") pod \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") "
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.420510 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvh8t\" (UniqueName: \"kubernetes.io/projected/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-kube-api-access-qvh8t\") pod \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\" (UID: \"2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a\") "
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.437323 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-kube-api-access-qvh8t" (OuterVolumeSpecName: "kube-api-access-qvh8t") pod "2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a" (UID: "2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a"). InnerVolumeSpecName "kube-api-access-qvh8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.496369 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a" (UID: "2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.522246 4772 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 24 04:11:10 crc kubenswrapper[4772]: I0124 04:11:10.522299 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvh8t\" (UniqueName: \"kubernetes.io/projected/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a-kube-api-access-qvh8t\") on node \"crc\" DevicePath \"\""
Jan 24 04:11:11 crc kubenswrapper[4772]: I0124 04:11:11.217617 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l2l6p/must-gather-9cb86"
Jan 24 04:11:11 crc kubenswrapper[4772]: I0124 04:11:11.665876 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a" path="/var/lib/kubelet/pods/2a09e6bb-72ad-46bf-a4f3-82a0ef5b7d0a/volumes"
Jan 24 04:11:19 crc kubenswrapper[4772]: I0124 04:11:19.660186 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:11:19 crc kubenswrapper[4772]: E0124 04:11:19.661052 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:11:33 crc kubenswrapper[4772]: I0124 04:11:33.665445 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:11:33 crc kubenswrapper[4772]: E0124 04:11:33.666701 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:11:47 crc kubenswrapper[4772]: I0124 04:11:47.659342 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:11:47 crc kubenswrapper[4772]: E0124 04:11:47.660201 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:12:01 crc kubenswrapper[4772]: I0124 04:12:01.659918 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:12:01 crc kubenswrapper[4772]: E0124 04:12:01.661020 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:12:15 crc kubenswrapper[4772]: I0124 04:12:15.659283 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:12:15 crc kubenswrapper[4772]: E0124 04:12:15.660139 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:12:29 crc kubenswrapper[4772]: I0124 04:12:29.658877 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:12:29 crc kubenswrapper[4772]: E0124 04:12:29.660646 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:12:43 crc kubenswrapper[4772]: I0124 04:12:43.666814 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:12:43 crc kubenswrapper[4772]: E0124 04:12:43.667579 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:12:57 crc kubenswrapper[4772]: I0124 04:12:57.663217 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:12:57 crc kubenswrapper[4772]: E0124 04:12:57.664723 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:13:09 crc kubenswrapper[4772]: I0124 04:13:09.658925 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:13:09 crc kubenswrapper[4772]: E0124 04:13:09.660204 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:13:24 crc kubenswrapper[4772]: I0124 04:13:24.660134 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:13:24 crc kubenswrapper[4772]: E0124 04:13:24.661256 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"
Jan 24 04:13:38 crc kubenswrapper[4772]: I0124 04:13:38.659132 4772 scope.go:117] "RemoveContainer" containerID="f9cdba11f7eb74292a5acd4431e6881361d8ca1e0f61f0b7c6fc203d30cc3490"
Jan 24 04:13:38 crc kubenswrapper[4772]: E0124 04:13:38.660290 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnn82_openshift-machine-config-operator(60ea55cf-a32f-46c5-9ad8-dec5dbc808b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnn82" podUID="60ea55cf-a32f-46c5-9ad8-dec5dbc808b0"